Earlier this year, I wrote a tutorial for MAKE Magazine on how to create stuffed animals of video game characters. The technique takes a given 3D model of the character and its texture, and programmatically generates the sewing pattern. While I’ve written a general summary and uploaded the source code to GitHub, I’m here to give a more in-depth explanation of the mathematics that makes all this possible.

My goal for the project was to create a printable sewing pattern that, once sewn together, would approximate the starting 3D model (in this case, a video game character). The gist of my technique is to use the 3D model’s texture image file as the sewing pattern. The texture image should be able to join at its UV seams to reconstitute the original 3D shape. The initial texture image for a 3D model might not be optimized for sewing reconstruction, but this can be remedied by creating a new set of UVs (with seams better suited for sewing) from the original model. Given the original UVs and the new UVs, a transformation matrix can be calculated for each face to transform the old texture image into a new, optimized texture image. The resolution of the sewn reconstruction depends on the location of the seams and the amount of distortion introduced by the UV unwrapping algorithm.

As described in the general summary, a 3D model is composed of a few different features. It has vertices, edges, and faces that define its 3D shape. It also has a set of UV coordinates that define how the texture is projected onto each face. Lastly, it has the texture image that defines how the 3D model is colored.
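To make those three features concrete, here is a minimal sketch (not the project’s actual loader) that parses them out of a tiny Wavefront OBJ snippet: `v` lines are 3D vertices, `vt` lines are UV coordinates, and `f` lines are faces that pair a vertex index with a UV index.

```python
obj_text = """
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.1 0.1
vt 0.9 0.1
vt 0.1 0.9
f 1/1 2/2 3/3
"""

vertices, uvs, faces = [], [], []
for line in obj_text.strip().splitlines():
    parts = line.split()
    if parts[0] == "v":          # 3D position
        vertices.append(tuple(float(x) for x in parts[1:]))
    elif parts[0] == "vt":       # texture (UV) coordinate
        uvs.append(tuple(float(x) for x in parts[1:]))
    elif parts[0] == "f":        # face: vertex-index/uv-index pairs
        faces.append([tuple(int(i) for i in p.split("/")) for p in parts[1:]])

print(len(vertices), len(uvs), faces[0])  # 3 3 [(1, 1), (2, 2), (3, 3)]
```

The key structural point is the `f` line: each corner of a face carries both a vertex index and a UV index, which is exactly the face-to-face correspondence the rest of this technique relies on.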

UV mapping, the process of projecting a 3D face onto a 2D texture surface, is well studied in the world of computer graphics. Each face of the 3D model is mapped to a face on the UV map. Each face on the UV map corresponds to exactly one face on the 3D model, and the UV map preserves the edge relationships between the faces of the 3D model. Yuki Igarashi, Ph.D., of the University of Tsukuba recognized this property of UVs and used it to create sewing patterns from dynamically created 3D models in her papers Plushie: An Interactive Design System for Plush Toys (SIGGRAPH 2006) and Pillow: Interactive Flattening of a 3D Model for Plush Toy Design (SIGGRAPH 2007). The specific UV mapping algorithm she used was ABF++.

Since the UVs map the texture image onto the 3D model, the texture image itself can also serve as a sewing pattern, just as the UV map can. The texture can be printed onto fabric, and the resulting stuffed animal sewn from that pattern will preserve the coloring of the original 3D model.

However, not every UV map is optimized for creating a sewing pattern. As you can see above, the UVs are folded on top of each other, so the head and body are halved. This is a popular technique in video game graphics to save texture space. The head is also much bigger than the body, so that the head appears to have finer detail in the game. These optimizations are ill-suited for sewing patterns, because we want the body to have roughly the same proportions in 2D UV space as it has in 3D space.

The seams of the UV clusters will become seams on the final stuffed animal. Starting from the same 3D model, the seam placement determines the resolution of the final sewn creation.

The initial UVs for my model are not suited for stuffed animal creation, so I made my own, optimized for sewing. Most modern 3D graphics software has UV mapping functionality (Maya, Blender, 3ds Max, etc.). For my project, I used UVLayout, a specialized UV mapping tool, but as seen in the MAKE Magazine article, Blender works just as well.

Armed with my newly minted UV map, I want to create a new texture map that corresponds to it, to be printed as my final sewing pattern. Here is where linear algebra comes in handy.

The polygon faces on the UV map are broken down into triangles. Each triangle face on the old, original UV map corresponds to a triangle on the new UV map, through their relationship with the same face on the 3D model. Since the two triangles represent the same shape, but with different coordinates on the UV map, a transformation matrix can be calculated between them. Triangles are used because we want to work with square matrices for computation. That transformation matrix can then be used to transform the corresponding triangular area of the old texture to color a triangular area of the new texture. Stack Overflow has a good explanation of how to compute the transformation matrix given the coordinates of two triangles, along with a useful code snippet, which I used.
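The idea can be sketched in a few lines of pure Python (this is a sketch of the math, not the Stack Overflow snippet I actually used): given three point correspondences, solve for the six affine coefficients with Cramer’s rule.

```python
def affine_from_triangles(src, dst):
    """Solve W = A Z for the affine A that maps triangle src onto dst.

    src, dst: three (x, y) points each. Returns ((a, b, c), (d, e, f))
    such that x' = a*x + b*y + c and y' = d*x + e*y + f.
    """
    (x1, y1), (x2, y2), (x3, y3) = src
    # Determinant of Z (with its homogeneous row of ones); it equals twice
    # the signed area of the source triangle, so it is zero only when the
    # three points are collinear.
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if det == 0:
        raise ValueError("degenerate source triangle")

    def solve_row(v1, v2, v3):
        # Cramer's rule for a*x + b*y + c = v at the three source points.
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    xs = [p[0] for p in dst]
    ys = [p[1] for p in dst]
    return solve_row(*xs), solve_row(*ys)

# Unit triangle scaled by 2 and translated by (2, 3):
A = affine_from_triangles([(0, 0), (1, 0), (0, 1)], [(2, 3), (4, 3), (2, 5)])
print(A)  # ((2.0, 0.0, 2.0), (0.0, 2.0, 3.0))
```

Solving each output coordinate as its own 3-unknown linear system is equivalent to inverting the homogeneous matrix Z described later on, just done row by row.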

If you compute the transformation matrix for each UV triangle and transform its corresponding texture triangle, the end result will be a new texture. If you apply the new texture and the new UVs to the original 3D model, there should be no difference in its visual appearance.

In my implementation, I mapped the UV coordinates to pixel coordinates on the texture image first, and then computed the transformation matrix. The mapping (combined with floating point imprecision) caused some rounding issues, since pixel coordinates have to be integers, which in turn produced singular matrices while solving for the transformation matrices. My hacky solution was to offset the pixel coordinates of one of the UV points by 1 pixel. I figured 1 pixel wouldn’t be too discernible on the final printed pattern.
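A sketch of that workaround (the helper names here are hypothetical, not from the project’s source): after snapping to integer pixels, a triangle’s three points can land on a single line, making the matrix singular, so nudge one point by a pixel until the triangle has area again.

```python
def collinear(p1, p2, p3):
    # Zero determinant (twice the signed area) means a degenerate triangle.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2) == 0

def fix_degenerate(tri):
    """Offset the first point by 1 pixel if the triangle is degenerate."""
    if not collinear(*tri):
        return tri
    (x, y), p2, p3 = tri
    # Try a few 1-pixel offsets; at least one leaves the p2-p3 line
    # (unless p2 == p3, in which case the face is unrecoverable).
    for dx, dy in ((1, 0), (0, 1), (1, 1)):
        cand = [(x + dx, y + dy), p2, p3]
        if not collinear(*cand):
            return cand
    raise ValueError("could not repair triangle")

print(fix_degenerate([(10, 10), (12, 10), (14, 10)]))
# [(10, 11), (12, 10), (14, 10)]
```

A 1-pixel offset distorts the pattern by at most a pixel, which is why it is invisible on the printed fabric.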

For example:

Above is the 3D model, with the highlighted face being the face of interest.

That face corresponds to a face on the original UV map, with the UV coordinates (0.7153, -0.2275), (0.78, -0.1982), (0.7519, -0.0935), (0.7207, -0.0382).

As you can see, the UVs map the texture image to the 3D model.

That particular UV face governs a small section of the texture image.

The highlighted face on the 3D model also corresponds to a face on the new UV map I’ve created.

Its coordinates are (0.046143, 0.63782), (0.133411, 0.683826), (0.09056, 0.660572), (0.108221, 0.6849).

Given the two sets of UV coordinates, I break the UV quadrilateral down into two triangles and compute the transformation matrix.
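The quadrilateral split itself is trivial; a sketch, using the original UV face’s coordinates from above and cutting along the diagonal between the first and third points (the project may cut along the other diagonal):

```python
def quad_to_triangles(quad):
    # Split a quad (p0, p1, p2, p3, in winding order) into two triangles
    # that share the diagonal p0-p2.
    p0, p1, p2, p3 = quad
    return (p0, p1, p2), (p0, p2, p3)

old_uv = [(0.7153, -0.2275), (0.78, -0.1982),
          (0.7519, -0.0935), (0.7207, -0.0382)]
t1, t2 = quad_to_triangles(old_uv)
print(t1[0] == t2[0])  # True: both triangles share the first corner
```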

To compute the transformation matrix, the equation is set up as follows:

W = AZ

where W is a matrix with the coordinates of the new UVs, A is the transformation matrix, and Z is a matrix with the coordinates of the old UVs.

Due to the use of homogeneous coordinates, W and Z are 3×3 square matrices with their last rows being [1 1 1], and A is also a 3×3 square matrix with its last row being [0 0 1].

See affine transformations for more details.

Populating our matrices with the actual coordinates gives the following two equations. The original UV coordinates map to pixel coordinates (384, 72), (396, 80), (401, 67), (383, 61). The new UV coordinates map to (29, 174), (23, 185), (33, 188), (35, 172). I use the pixel coordinates for the transformation.

As mentioned before, there are two equations because I’m breaking the quadrilateral into two triangles.

To solve for A, I could take the inverse of Z and multiply it by W. Z is invertible because its determinant is non-zero: the determinant equals twice the signed area of the triangle its columns describe, which is non-zero as long as the triangle’s points are not collinear.
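You can check this determinant-equals-twice-the-area fact on the first triangle’s pixel coordinates from above (a quick sketch, not project code):

```python
def det3_homogeneous(p1, p2, p3):
    # Determinant of [[x1, x2, x3], [y1, y2, y3], [1, 1, 1]],
    # i.e. of Z with its homogeneous row of ones.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

# First triangle of the original UV face, in pixel coordinates.
d = det3_homogeneous((384, 72), (396, 80), (401, 67))
print(d, abs(d) / 2)  # -196 98.0 -> non-zero, so Z is invertible;
                      # the triangle covers 98 square pixels
```

The sign only encodes the winding order of the three points; the magnitude is what guarantees invertibility.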

However, in the actual implementation, I solved it in a more straightforward manner, by carrying out the matrix multiplication between A and Z and solving the resulting system of unknowns. Read more about it here.

When applied to the texture image region that the original UV face governs, I get the following transformed texture piece:

After transforming each of the texture image regions, you get the following texture image, which you can then print.

The orange arrow indicates where the transformed texture piece fits into the whole texture image.
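The per-region warp can be sketched as inverse mapping: for each destination pixel, apply the inverse of A to find which source pixel to copy (nearest-neighbour sampling). This toy version warps a whole grid with a hand-picked transform rather than rasterizing one triangle with a computed A, which is what the real implementation does; an image library such as PIL would normally handle this step.

```python
# Toy 4x4 "texture" whose pixel values encode their own coordinates.
src = [[x + 10 * y for x in range(4)] for y in range(4)]
dst = [[0] * 8 for _ in range(8)]

def inv_A(x, y):
    # Inverse of a stand-in A that scales coordinates by 2; in the real
    # pipeline this is the inverse of the solved transformation matrix.
    return x // 2, y // 2

# Pull each destination pixel from its source location (inverse mapping
# avoids the holes that forward-mapping source pixels would leave).
for y in range(8):
    for x in range(8):
        sx, sy = inv_A(x, y)
        dst[y][x] = src[sy][sx]

print(dst[7][7])  # 33: the corner pixel came from src[3][3]
```

Restricting the loop to the pixels inside the destination triangle (standard triangle rasterization, omitted here) turns this into the per-face transform described above.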

There you have it: a more theoretical/mathematical explanation of how to create sewing patterns from 3D models.