Crafty Curios

Lab tape usually comes in sharp neon colors but I’d wanted something more personal, subtle, and fun. So, I got some lovely washi tape from uguisu.


  • They are a tad less sticky than regular lab tape, but still adhere well to everything from glassware to cardboard boxes.
  • VWR lab markers write very well on them, as do Sharpies.
  • They leave no residue when peeled off, just like regular lab tape.
  • They can survive in the -4C freezer as well as 37C warm shakers, though they become a little harder to peel if left in the warm room (they still won’t leave residue). I haven’t tested the tape in -80C freezers.
  • They have beautiful patterns.


  • Washi tapes are a bit sheer. It’s harder to cover up text in the background.

Overall, I think it’s delightful to use washi tape in lab. It’s a small, neat combination of science and crafts. One look at the oxalis design, and you’ll know those plates are mine.

Here’s my approach for how to turn 2D images into 3D models.

There are web apps that will do this for you, but my approach gives you a lot more control and may produce higher quality models.

Also, it only uses free and open source software. 😀



First we start with an image. I have here the logo of Julia, obtained from Wikipedia.

Julia logo

Open the image in Inkscape


Select the image and navigate to Path > Trace Bitmap

Trace Bitmap

Adjust the Trace Bitmap settings. Trace Bitmap will turn your image into a vector path. Clicking Update shows a preview of the final output. Hit OK to confirm the changes.


Save the newly traced image as an SVG file.

Julia vectorized

Open Blender. Import the SVG using File > Import.


Your SVG will be imported as a very small Curve object, so you’ll need to zoom in to see it.

Imported SVG

Navigate to the Curve settings.


Adjust Extrude to be 0.005 to make the object 3D.


Tada! A 3D object!

Julia 3D

Changing the object color to white might help you see it better.

Julia white

With this 3D object, you can save it in a popular 3D format or do more manipulations on it.
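If you’d rather script the Blender half of this pipeline, the steps above can be sketched with Blender’s Python API. This is a minimal sketch to run inside Blender; the file path is a placeholder.

```python
import bpy

# Import the traced SVG; it arrives as one or more tiny Curve objects
# (the file path is a placeholder).
bpy.ops.import_curve.svg(filepath="/path/to/julia.svg")

# Give every imported curve thickness, as in the Extrude step above.
for obj in bpy.context.scene.objects:
    if obj.type == 'CURVE':
        obj.data.extrude = 0.005

# From here you could convert the curves to meshes, e.g. with
# bpy.ops.object.convert(target='MESH'), and then export to your
# preferred 3D format.
```
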

This method is pretty straightforward and gives you control over a lot of individual parameters, like the bitmap tracing settings.

Hope it helps!

It’s been a while since I started my project to 3D print glasses frames, and I’m really excited to share the results.

3D printed glasses frame

Here is my 3D printed glasses frame holding real prescription lenses!

In the first iteration of this project, I took an image and created a 3D printable glasses frame, using code and Blender.

A single click is all it took to procedurally generate the 3D model.

SVG to glasses

Since that first iteration, I learned a lot more about the actual design of glasses frames and improved my algorithms.

I decided to test my algorithm by copying a pair of frames I already own.

Using an image of the front view of my glasses frame, my Blender script created this 3D model:

generated glasses

I manually added lens grooves to fit my prescription lenses.


I popped the prescription lenses out of my frame and popped them into my 3D printed frame.

popped out lenses

They fit astonishingly well.

3D printed glasses frame

To make your own, check out the previous post for instructions.

To create lens grooves for your 3D model:

  1. In Edit mode, use the Knife tool on a nosepad to create a boundary between the nosepad and the frame.


  2. Use the Loop Cut and Slide tool to create three edge loops. They will be the boundaries of the lens grooves.

Here’s the first loop:

Loop Cut and Slide

The next two loops will be on both sides of the middle loop.

  3. Select the middle edge loop and scale it up in the XZ direction. This will create the groove itself.

  4. Do this for both sides.

After creating the lens grooves, your glasses frame is ready for 3D printing.

3D printed glasses

Here’s a video of adding the lenses to the 3D printed glasses:

Since the lenses fit the 3D printed frame pretty well, I can say the algorithm/script creates an accurate enough glasses model for the frame portion. However, there’s more work to be done to make better bridges and nosepads, since the nosepads aren’t quite large enough. For now, the script creates nice prototypes.

I had a short stint running a blog for my biology-inspired projects, called Iron Chef SynBio.

Since it hasn’t been updated in years, I’ve decided to archive it, and migrate all the posts here.

There were three especially cool projects on Iron Chef SynBio:

  • ABSee
    • a Ruby gem to read DNA chromatogram (.ab1) files

Ribbon Model of Enzyme Fok1

biological circuit


video game plushie

Video Game Plushie Photo by MAKE Magazine

Earlier this year, I wrote a tutorial for MAKE Magazine on how to create stuffed animals of video game characters. The technique takes a given 3D model of the character, along with its texture, and programmatically generates the sewing pattern. While I’ve written a general summary and uploaded the source code to GitHub, I’m here to give a more in-depth explanation of the mathematics that makes all this possible.

My goal for the project was to create a printable sewing pattern that, once sewn together, would approximate the starting 3D model (in this case, a video game character). The gist of my technique is to use the 3D model’s texture image file as the sewing pattern. The texture image should be able to join at its UV seams to reconstitute the original 3D shape. The initial texture image for a 3D model might not be optimized for sewing reconstruction, but this can be remedied by creating a new set of UVs (with seams better suited for sewing) from the original model. Given the original UVs and the new UVs, a transformation matrix can be calculated for each face to transform the old texture image into a new, optimized texture image. The resolution of the sewn reconstruction depends on the location of the seams and the amount of distortion introduced by the UV unwrapping algorithm.

As described in the general summary, a 3D model is composed of a few different features. It has vertices, edges, and faces that define its 3D shape. It also has a set of UV coordinates that define how the texture is projected onto each face. Lastly, it has the texture image that defines how the 3D model is colored.

components of a 3D model
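To make those components concrete, here is a toy representation in Python. This is purely illustrative (real formats like OBJ encode the same information differently), and the file name is made up.

```python
# A toy mesh: one textured triangle. (Illustrative only.)
mesh = {
    # 3D shape: vertex positions, plus faces as tuples of vertex indices.
    "vertices": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    "faces": [(0, 1, 2)],
    # UV coordinates: where each corner of each face lands on the
    # texture image, in [0, 1] x [0, 1] texture space.
    "uvs": {(0, 1, 2): [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]},
    # The texture image that colors the model.
    "texture": "character_texture.png",
}

# Every face carries one UV coordinate per corner.
for face in mesh["faces"]:
    assert len(mesh["uvs"][face]) == len(face)
```
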

UV mapping, the process of projecting the 3D faces onto a 2D texture surface, is well studied in the world of computer graphics. Each face of the 3D model is mapped to a face on the UV map. Each face on the UV map corresponds to exactly one face on the 3D model, and the UV map preserves the edge relationships between the faces of the 3D model. Yuki Igarashi, Ph.D., of the University of Tsukuba recognized this property of UVs and used it to create sewing patterns from dynamically created 3D models in her papers Plushie: An Interactive Design System for Plush Toys (SIGGRAPH 2006) and Pillow: Interactive Flattening of a 3D Model for Plush Toy Design (SIGGRAPH 2007). The specific UV mapping algorithm she used was ABF++.

Since UV maps can serve as sewing patterns, so can the texture image, because the UVs map the texture image onto the 3D model. The texture can be printed onto fabric, and the resulting stuffed animal sewn from the pattern will preserve the coloring of the original 3D model.

However, not every UV map is optimized for sewing pattern creation. As you can see above, the UVs are folded on top of each other, so the head and body are halved. This is a popular technique in video game graphics to save texture space. The head is also much bigger than the body, so that the head appears to have finer detail in the video game. These optimizations are ill-suited for sewing patterns, because we want the body parts to have roughly the same proportions in 2D UV space as they do in 3D space.

Differences in Final Resolution

Differences in Final Resolution, from Pillow by Igarashi

The seams of the UV clusters will become seams on the final stuffed animal. Starting with the same 3D model, the seam placement will determine the resolution of the final sewn creation.

The initial UVs for my model are not suited for stuffed animal creation, so I made my own UVs, optimized for sewing. Most modern 3D graphics software has UV mapping functionality (Maya, Blender, 3ds Max, etc.). For my project, I used UVLayout, a specialized UV mapping tool, but as seen in the MAKE Magazine article, Blender works just as well.

UV maps

A Portion of My Final UV Map

Armed with my newly minted UV map, I want to create a new texture map that corresponds to it, to be printed as my final sewing pattern. Here is where linear algebra comes in handy.

The polygon faces on the UV map are broken down into triangles. Each triangle face on the old, original UV map corresponds to a triangle on the new UV map, through their relationship with the same face on the 3D model. Since the two triangles represent the same shape, just with different coordinates on the UV map, a transformation matrix can be calculated between them. Triangles are used because we want to work with square matrices for computation. That transformation matrix can then be used to transform the corresponding triangular area of the old texture to color a new triangular area on the new texture. Stack Overflow has a good explanation of how to compute the transformation matrix given the coordinates of two triangles, along with a useful code snippet, which I used.
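The idea can be sketched in plain Python. This is my own illustration of the computation (not the Stack Overflow snippet itself): it solves for the six unknowns of the affine matrix from three point correspondences using Cramer’s rule.

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))


def affine_from_triangles(src, dst):
    """Return the 2x3 affine matrix [[a, b, c], [d, e, f]] mapping each
    src vertex (x, y) to its dst vertex:
        x' = a*x + b*y + c
        y' = d*x + e*y + f
    src, dst: lists of three (x, y) tuples."""
    # Coefficient matrix: one row [x, y, 1] per source vertex.
    M = [[src[i][0], src[i][1], 1.0] for i in range(3)]
    det = _det3(M)
    if det == 0:
        raise ValueError("degenerate (collinear) source triangle")
    rows = []
    for axis in (0, 1):  # solve for [a, b, c], then [d, e, f]
        rhs = [dst[i][axis] for i in range(3)]
        row = []
        for col in range(3):
            # Cramer's rule: replace column `col` of M with the rhs.
            Mc = [r[:] for r in M]
            for i in range(3):
                Mc[i][col] = rhs[i]
            row.append(_det3(Mc) / det)
        rows.append(row)
    return rows


# Pure translation by (+2, +3):
A = affine_from_triangles([(0, 0), (1, 0), (0, 1)], [(2, 3), (3, 3), (2, 4)])
# A == [[1.0, 0.0, 2.0], [0.0, 1.0, 3.0]]
```
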

If you compute the transformation matrix for each UV triangle and transform its corresponding texture triangle, the end result will be a new texture. If you apply the new texture and the new UVs to the original 3D model, there should be no difference in its visual appearance.

In my implementation, I first mapped the UV coordinates to pixel coordinates on the texture image, and then computed the transformation matrix. The mapping (combined with floating point imprecision) caused some rounding issues, since pixel coordinates have to be integers, which in turn produced singular matrices when solving for the transformation matrices. My hacky solution was to offset one of the pixel coordinates for one of the UV points by 1 pixel. I figured 1 pixel wouldn’t be too discernible on the final printed pattern.
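The degenerate case and the 1-pixel fix can be illustrated like this (a sketch with helper names of my own): if rounding collapses the three pixel coordinates onto a line, the determinant is zero and the system has no unique solution, so one vertex gets nudged by a pixel.

```python
def triangle_det(p0, p1, p2):
    """Twice the signed area of the triangle p0-p1-p2. Zero means the
    points are collinear, so the coordinate matrix Z is singular."""
    return ((p1[0] - p0[0]) * (p2[1] - p0[1])
            - (p2[0] - p0[0]) * (p1[1] - p0[1]))


def nudge_if_degenerate(tri):
    """If rounding UVs to integer pixels collapsed the triangle onto a
    line, offset one vertex by 1 pixel so the transformation matrix can
    still be solved. 1 px is barely visible on a printed pattern."""
    p0, p1, p2 = tri
    if triangle_det(p0, p1, p2) == 0:
        p2 = (p2[0] + 1, p2[1])
        if triangle_det(p0, p1, p2) == 0:  # line was horizontal; nudge y
            p2 = (p2[0], p2[1] + 1)
    return [p0, p1, p2]
```
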

For example:

model face

Above is the 3D model, with the highlighted face being the face of interest.

corresponding UV

That face corresponds to a face on the original UV map, with the UV coordinates (0.7153, -0.2275), (0.78, -0.1982), (0.7519, -0.0935), (0.7207, -0.0382).

UV overlay

As you can see, the UVs map the texture image to the 3D model.

particular UV face

That particular UV face governs a small section of the texture image.

new UVs

The highlighted face on the 3D model also corresponds to a face on the new UV map I’ve created.

Its coordinates are (0.046143, 0.63782), (0.133411, 0.683826), (0.09056, 0.660572), (0.108221, 0.6849).

Given the two sets of UV coordinates, I break the UV quadrilateral down into two triangles and compute the transformation matrix.

To compute the transformation matrix, the equation is set up as follows:

W = A \times Z

where W is a matrix with the coordinates of the new UVs, A is the transformation matrix, and Z is a matrix with the coordinates of the old UVs.

Due to the use of homogeneous coordinates, W and Z are 3×3 square matrices with their last rows being [1 1 1], and A is also a 3×3 square matrix with its last row being [0 0 1].
See affine transformations for more details.

Populating our matrices with the actual coordinates gives the following two equations. The original UV coordinates map to pixel coordinates (384, 72), (396, 80), (401, 67), (383, 61). The new UV coordinates map to (29, 174), (23, 185), (33, 188), (35, 172). I use the pixel coordinates for the transformation.

\begin{bmatrix} 29 & 23 & 33 \\ 174 & 185 & 188 \\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} a & b & c\\ d & e & f\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 384 & 396 & 401\\ 72 & 80 & 67\\ 1 & 1 & 1 \end{bmatrix}

\begin{bmatrix} 33 & 35 & 29\\ 188 & 172 & 174\\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} i & j & k\\ l & m & n\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 401 & 383 & 384\\ 67 & 61 & 72\\ 1 & 1 & 1 \end{bmatrix}

As mentioned before, there are two equations because I’m breaking the quadrilateral into two triangles.

To solve for A, I can multiply both sides on the right by the inverse of Z. Z is invertible because its determinant is non-zero: the determinant is proportional to the signed area of the triangle its columns describe, and that area is non-zero for any non-degenerate triangle.

W \times Z^{-1} = A \times Z \times Z^{-1}

W \times Z^{-1} = A

However, in the actual implementation, I solved it in a more straightforward manner by carrying out the matrix multiplication between A and Z and solving a system of unknowns. Read more about it here.
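As a sanity check, the first triangle from the example above can be worked in plain Python. The solver below is my own sketch of the "system of unknowns" approach (Cramer’s rule per row of A), using the pixel coordinates quoted earlier.

```python
def solve_affine(src, dst):
    """Solve W = A x Z for the six unknowns of A, given three source
    and three destination pixel coordinates as (x, y) tuples."""
    det3 = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                      - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                      + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    M = [[x, y, 1.0] for x, y in src]  # Z transposed: rows [x, y, 1]
    d = det3(M)
    A = []
    for axis in (0, 1):  # first row [a, b, c], then [d, e, f]
        rhs = [p[axis] for p in dst]
        row = []
        for col in range(3):
            # Cramer's rule: swap in the right-hand side for one column.
            Mc = [r[:] for r in M]
            for i in range(3):
                Mc[i][col] = rhs[i]
            row.append(det3(Mc) / d)
        A.append(row)
    return A


# Old-texture pixel triangle -> new-texture pixel triangle (from the post).
old_tri = [(384, 72), (396, 80), (401, 67)]
new_tri = [(29, 174), (23, 185), (33, 188)]
A = solve_affine(old_tri, new_tri)

# The solved matrix should map every old vertex onto its new vertex.
for (x, y), (nx, ny) in zip(old_tri, new_tri):
    assert abs(A[0][0] * x + A[0][1] * y + A[0][2] - nx) < 1e-6
    assert abs(A[1][0] * x + A[1][1] * y + A[1][2] - ny) < 1e-6
```
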

When applied to the texture image region that the original UV governs, I get the following transformed texture image piece:


After transforming each of the texture image regions, you get the following texture image which you can then print.

The orange arrow indicates where the transformed texture piece fits into the whole texture image.

transformed texture image

There you have it, a more theoretical/mathematical explanation of how to create sewing patterns from 3D models.

Today I saw a post about Emendo, a program for mesh repair, and became a little miffed. It costs roughly $50, which I think is pretty steep for something you could do in a few lines of code in Python with the Blender API. Even if you’re not a programmer, there are $0 solutions like netfabb Basic and free software like MeshLab and Blender. I can’t attest to the quality of Emendo, but it has to be amazing to be worth that price.

So, let’s talk mesh repair: what it is, why it’s hard, and why I would or wouldn’t spend $50 on it.

Mesh repair is essentially fixing up a 3D model so it can satisfy 3D printing constraints. I became interested in mesh repair when I was modeling a small figurine. When designing, I cared more about aesthetics of the model than the printability of the model. I figured there would be tools to magically and automatically correct my model to be printable.

There is such a tool. It’s called netfabb.

Netfabb is software that’s pretty good at fixing all sorts of problems. It’s also popular among 3D printing enthusiasts and is leveraged by businesses and 3D printer manufacturers (like Shapeways, Formlabs, and Figulo).

Unfortunately, my model turned out to be disastrously hard to repair. Netfabb failed, and I ended up concocting a solution out of both netfabb Basic and MeshLab after days of struggle.

Rakdos figurine

Turning a model of any shape into a printable model is a hard problem. For example, suppose you have holes in your model that make it unprintable. The mesh repair software will try to patch those holes. These holes are big gaps of missing information, and the software has to guess what should go in them based on the topology of each hole’s surroundings. While some solutions can be easily guessed, other guesses can create more problems and can even conflict with one another. Here are wonderful visualizations of capping a hole in three different and valid ways, illustrating the underconstrained nature of the hole problem. Holes are just one of the many problems a 3D model can have.

There are many algorithms for mesh repair, but none of them are perfect. Here is an excellent introduction to various algorithms and the tradeoffs between them. Here is a more recent and more in-depth look at different algorithms and their features.

I use my figurine model to test the “robustness” of different mesh repair software. It may not be a fair test, since my model may be a very hard edge case, but I’m testing mostly to satisfy my curiosity. Here are some results:

netfabb Basic 4.9: failed

Netfabb flags anything unprintable with a big red caution sign. Even after repairs, the caution sign remained. Netfabb Cloud also failed. I believe the most recent version of netfabb Basic will successfully repair the model. Since netfabb is such a staple tool, I judge the success or failure of the other mesh repair programs by importing the fixed models into netfabb and recording whether netfabb deems them printable.

Autodesk Meshmixer version 10.3.44: failed

Autodesk 3D Print Utility 1.1.1: succeeded

The 3D print utility actually succeeded in the mesh repair. However, it took about 3 hours for the repair to complete on my laptop, and the resulting mesh had a tremendous loss of resolution.

Photoshop CC: failed

I don’t know what version of Photoshop CC I used, but it was a 30-day trial from this past January. The repaired model still had issues, but could subsequently be repaired by netfabb. Photoshop actually emailed me asking for the figurine model, so I suspect they might’ve upgraded their algorithm.

GitHub: failed


Custom Blender Script: succeeded

Over the summer, when I was working on 3D printing glasses from 2D designs, I realized I needed to repair the glasses models. I started writing a mesh repair script and decided to design it to repair the figurine as well. My script does indeed fix the model, but it takes about five minutes to run on the figurine model. The script produces better resolution in less time than Autodesk’s 3D Print Utility, but it’s not the fastest, since I loop over the mesh multiple times. I’ve decided to release it as a Blender Add-On, in the 3D Print Toolbox, available now in Blender 2.72b. Hope it’s helpful!

3D Print Add-On

The script’s algorithm works better on simple models or models with dense polygons. If patching holes results in non-manifold geometry, the script will try to delete vertices around the holes and then patch them again. If the polygons are dense, removing a few shouldn’t affect the overall aesthetics.
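The repair loop described above could be sketched with Blender’s Python API roughly like this. This is an illustrative outline, not the actual Add-On code; it assumes the model is the active object and must be run inside Blender.

```python
import bpy

# Work on the active mesh object in Edit Mode, vertex select mode.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type='VERT')

for _ in range(10):  # bounded number of repair passes
    # Select the boundaries of any holes (non-manifold geometry).
    bpy.ops.mesh.select_all(action='DESELECT')
    bpy.ops.mesh.select_non_manifold()
    # Patch the selected holes with new faces.
    bpy.ops.mesh.fill()
    # If the patch itself produced non-manifold geometry, delete the
    # offending vertices so the next pass can patch the larger hole.
    bpy.ops.mesh.select_all(action='DESELECT')
    bpy.ops.mesh.select_non_manifold()
    bpy.ops.mesh.delete(type='VERT')

bpy.ops.object.mode_set(mode='OBJECT')
```

Once the mesh is clean, `select_non_manifold` selects nothing and the remaining passes become no-ops.
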

It probably won’t work for all models, though.

$50 for mesh repair software that repairs all models would be absolutely worth it, for all the time otherwise spent on manual repairs. However, given the diversity of models and the diversity of problems (some of which can be caused by printer-dependent constraints), I doubt there’s such a silver bullet right now.

Between the Blender Add-On and netfabb, you have a pretty good chance that your model will get repaired. However, should you need to, I advocate using Blender’s Python API to build your own repair tool. It will allow you to create something catered to your specific modeling needs. It might not be as fast as pressing a button, but you’ll have unparalleled customization ability. I hear the paid upgraded versions of netfabb give you a lot of customization choices in mesh repair, but I doubt it would be as thorough as interacting with the models directly. The Blender API already provides some nice helper functions like bpy.ops.mesh.fill to fill in faces and bpy.ops.mesh.select_non_manifold to select non-manifold vertices. You could even update the Add-On and release it back to the Blender community.