The Mathematics Behind Creating Sewing Patterns From 3D Models

Video Game Plushie Photo by MAKE Magazine

Earlier this year, I wrote a tutorial for MAKE Magazine on how to create stuffed animals of video game characters. The technique takes a given 3D model of the character, along with its texture, and programmatically generates the sewing pattern. While I’ve written a general summary and uploaded the source code to GitHub, here I want to give a more in-depth explanation of the mathematics that makes all this possible.

My goal for the project was to create a printable sewing pattern that, once sewn together, would approximate the starting 3D model (in this case, a video game character). The gist of my technique is to use the 3D model’s texture image file as the sewing pattern. The texture image should be able to join at its UV seams to reconstitute the original 3D shape. The initial texture image for a 3D model might not be laid out well for sewing reconstruction, but this can be remedied by creating a new set of UVs (with seams better suited to sewing) from the original model. Given the original UVs and the new UVs, a transformation matrix can be calculated for each face to transform the old texture image into a new, optimized texture image. The resolution of the sewn reconstruction depends on the location of the seams and the amount of distortion introduced by the UV unwrapping algorithm.

As described in the general summary, a 3D model is composed of a few different features. It has vertices, edges, and faces that define its 3D shape. It also has a set of UV coordinates that define how the texture is projected onto each face. Lastly, it has the texture image, which defines how the 3D model is colored.
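
For concreteness, these three pieces of data all live in the model file. I work with the OBJ format (see the comments below), where vertex positions are `v` lines, UV coordinates are `vt` lines, and faces are `f` lines that index into both. Here is a minimal sketch of pulling them out; it assumes every face corner carries a `vertex/uv` index pair and ignores everything else.

```python
def read_obj(path):
    vertices, uvs, faces = [], [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":        # 3D vertex position: v x y z
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "vt":     # UV (texture) coordinate: vt u v
                uvs.append(tuple(float(x) for x in parts[1:3]))
            elif parts[0] == "f":      # face corners: f v1/vt1 v2/vt2 ...
                corners = [p.split("/") for p in parts[1:]]
                faces.append([(int(c[0]) - 1, int(c[1]) - 1) for c in corners])
    return vertices, uvs, faces
```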

components of a 3D model

UV mapping, the process of projecting each 3D face onto a 2D texture surface, is well studied in computer graphics. Each face of the 3D model is mapped to a face on the UV map, each face on the UV map corresponds to exactly one face on the 3D model, and the UV map preserves the edge relationships between faces of the 3D model. Yuki Igarashi, Ph.D., from the University of Tsukuba recognized this property of UVs and used it to create sewing patterns from dynamically created 3D models in her papers Plushie: An Interactive Design System for Plush Toys (SIGGRAPH 2006) and Pillow: Interactive Flattening of a 3D Model for Plush Toy Design (SIGGRAPH 2007). The specific UV mapping algorithm she used was ABF++.

Since UV maps can be used as sewing patterns, so can the texture image, because the UVs map the texture image onto the 3D model. The texture can be printed onto fabric, and the resulting stuffed animal sewn from that pattern will preserve the coloring of the original 3D model.

However, not every UV map is suited to sewing pattern creation. As you can see above, the UVs are folded on top of each other, so the head and body are each halved. This is a popular technique in video game graphics to save texture space. The head is also much bigger than the body, so that the head appears to have finer detail in the video game. These optimizations are ill-suited to sewing patterns, because we want the body to have roughly the same proportions in 2D UV space as it does in 3D space.

Differences in Final Resolution, from Pillow by Igarashi

The seams of the UV clusters will become seams on the final stuffed animal. Starting from the same 3D model, the seam placement determines the resolution of the final sewn creation.

The initial UVs for my model were not suited to stuffed animal creation, so I made my own UVs, optimized for sewing. Most modern 3D graphics software has UV mapping functionality (Maya, Blender, 3ds Max, etc.). For my project, I used UVLayout, a specialized UV mapping tool, but as seen in the MAKE Magazine article, Blender works just as well.

UV maps

A Portion of My Final UV Map

Armed with my newly minted UV map, I want to create a new texture map that corresponds to it, to be printed as my final sewing pattern. Here is where linear algebra comes in handy.

The polygon faces on the UV map are broken down into triangles. Each triangle on the old, original UV map corresponds to a triangle on the new UV map, through their relationship with the same face on the 3D model. Since the two triangles represent the same shape, just with different coordinates on the UV map, a transformation matrix can be calculated between them. Triangles are used because we want to work with square matrices for the computation. That transformation matrix can then be used to transform the corresponding triangular area on the old texture and color a triangular area on the new texture. Stack Overflow has a good explanation of how to compute the transformation matrix given the coordinates of two triangles, along with a useful code snippet, which I used.
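
As a minimal sketch of the per-triangle computation (using numpy, not the exact Stack Overflow snippet), the matrix can be found by stacking each triangle’s corners as columns in homogeneous coordinates and solving the resulting linear system:

```python
import numpy as np

def affine_between_triangles(old_tri, new_tri):
    """Solve W = A x Z for the 3x3 affine matrix A that maps the old
    triangle's coordinates onto the new triangle's coordinates."""
    # Columns of Z and W are the triangle corners, with a row of ones appended.
    Z = np.vstack([np.asarray(old_tri, dtype=float).T, np.ones(3)])
    W = np.vstack([np.asarray(new_tri, dtype=float).T, np.ones(3)])
    # A = W Z^{-1}; solving Z^T A^T = W^T avoids forming the inverse explicitly.
    return np.linalg.solve(Z.T, W.T).T
```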

If you compute the transformation matrix for each UV triangle and transform its corresponding texture triangle, the end result will be a new texture. If you apply the new texture and the new UVs to the original 3D model, there should be no difference in its visual appearance.

In my implementation, I first mapped the UV coordinates to pixel coordinates on the texture image, and then computed the transformation matrix. This mapping, combined with floating-point imprecision, caused some rounding issues (since pixel coordinates have to be integers), which in turn produced singular matrices while solving for the transformation matrices. My hacky solution was to offset one of the pixel coordinates of one of the UV points by 1 pixel. I figured 1 pixel wouldn’t be discernible on the final printed pattern.
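
Here is a sketch of both steps. The UV-to-pixel conversion below assumes a common convention (U running left to right, V running bottom to top), which varies between tools, so treat it as an assumption rather than the one true mapping; the second function is the one-pixel nudge used to dodge singular matrices.

```python
import numpy as np

def uv_to_pixel(u, v, width, height):
    # Assumes U runs left to right and V runs bottom to top on the image;
    # adjust to match how your texture was authored.
    return round(u * (width - 1)), round((1 - v) * (height - 1))

def nudge_if_degenerate(Z):
    """If rounding to integer pixels made the triangle's corners (nearly)
    collinear, offset one pixel coordinate of one point by 1 pixel so that
    Z stays invertible."""
    if abs(np.linalg.det(Z)) < 1e-9:
        Z = Z.copy()
        Z[0, 0] += 1
    return Z
```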

For example:

model face

Above is the 3D model, with the highlighted face being the face of interest.

corresponding UV

That face corresponds to a face on the original UV map, with the UV coordinates (0.7153, -0.2275), (0.78, -0.1982), (0.7519, -0.0935), (0.7207, -0.0382).

UV overlay

As you can see, the UVs map the texture image to the 3D model.

particular UV face

That particular UV face governs a small section of the texture image.

new UVs

The highlighted face on the 3D model also corresponds to a face on the new UV map I’ve created.

Its coordinates are (0.046143, 0.63782), (0.133411, 0.683826), (0.09056, 0.660572), (0.108221, 0.6849).

Given the two sets of UV coordinates, I break each UV quadrilateral down into two triangles and compute a transformation matrix for each, as sketched below.
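
The split is a simple fan: corners (1, 2, 3) form the first triangle and corners (3, 4, 1) the second, which is exactly how the two systems of equations below are set up.

```python
def triangulate_quad(quad):
    # Fan triangulation: (p1, p2, p3) and (p3, p4, p1).
    p1, p2, p3, p4 = quad
    return [(p1, p2, p3), (p3, p4, p1)]
```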

To compute a transformation matrix, the equation is set up as follows:

W = A \times Z

where W is a matrix with the coordinates of the new UVs, A is the transformation matrix, and Z is a matrix with the coordinates of the old UVs.

Due to the use of homogeneous coordinates, W and Z are 3×3 square matrices whose last rows are [1 1 1], and A is also a 3×3 square matrix whose last row is [0 0 1].
See affine transformations for more details.
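
Written out for a single point, the affine transformation acts as:

\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

so x' = ax + by + c and y' = dx + ey + f, which is why the last rows of W and Z are all ones and the last row of A is [0 0 1]. Stacking the three corners of a triangle as the columns of W and Z gives the W = A \times Z form above.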

Populating our matrices with the actual coordinates gives the following two equations. The original UV coordinates map to pixel coordinates (384, 72), (396, 80), (401, 67), (383, 61). The new UV coordinates map to (29, 174), (23, 185), (33, 188), (35, 172). I use the pixel coordinates for the transformation.

\begin{bmatrix} 29 & 23 & 33 \\ 174 & 185 & 188 \\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} a & b & c\\ d & e & f\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 384 & 396 & 401\\ 72 & 80 & 67\\ 1 & 1 & 1 \end{bmatrix}

\begin{bmatrix} 33 & 35 & 29\\ 188 & 172 & 174\\ 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} i & j & k\\ l & m & n\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 401 & 383 & 384\\ 67 & 61 & 72\\ 1 & 1 & 1 \end{bmatrix}

As mentioned before, there are two equations because I’m breaking the quadrilateral into two triangles.

To solve for A, I could take the inverse of Z and multiply it with W. Z is invertible because its determinant is non-zero: the determinant is proportional to the area of the triangle its columns describe, and that triangle is non-degenerate.

W \times Z^{-1} = A \times Z \times Z^{-1}

W \times Z^{-1} = A

However, in the actual implementation, I solved it in a more straightforward manner, by writing out the matrix multiplication between A and Z and solving the resulting system of equations for the unknowns. Read more about it here.
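
As a quick numerical check of the first system (a sketch that uses numpy’s explicit inverse for clarity, rather than the equation-solving approach described above):

```python
import numpy as np

# The first triangle of the worked example: old pixel coordinates as the
# columns of Z, new pixel coordinates as the columns of W (homogeneous form).
Z = np.array([[384.0, 396.0, 401.0],
              [ 72.0,  80.0,  67.0],
              [  1.0,   1.0,   1.0]])
W = np.array([[ 29.0,  23.0,  33.0],
              [174.0, 185.0, 188.0],
              [  1.0,   1.0,   1.0]])

A = W @ np.linalg.inv(Z)   # A = W Z^{-1}
print(np.round(A, 4))
# The last row of A comes out as [0 0 1] (up to floating point error), and
# A @ Z reproduces W, confirming the affine form described above.
```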

Applying the transformation to the texture image region that the original UV face governs, I get the following transformed texture image piece:

piece

After transforming each of the texture image regions, you get the following texture image which you can then print.

The orange arrow indicates where the transformed texture piece fits into the whole texture image.

transformed texture image
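
For completeness, here is a sketch of the warp-and-paste step for a single triangle, using OpenCV. My script builds and applies the 3×3 matrices itself, so treat the cv2 calls here as one convenient way to express the same operation, not the actual implementation.

```python
import cv2
import numpy as np

def warp_triangle(old_img, new_img, old_tri, new_tri):
    """Warp the triangular patch old_tri of old_img onto new_tri in new_img."""
    old_tri = np.float32(old_tri)
    new_tri = np.float32(new_tri)

    # Work only inside the destination triangle's bounding box.
    x, y, w, h = cv2.boundingRect(new_tri)
    shifted = new_tri - np.float32([x, y])

    # OpenCV computes the 2x3 affine matrix between the two triangles for us.
    M = cv2.getAffineTransform(old_tri, shifted)
    patch = cv2.warpAffine(old_img, M, (w, h), flags=cv2.INTER_LINEAR)

    # Mask off everything outside the triangle and paste it into place.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(shifted), 255)
    region = new_img[y:y + h, x:x + w]
    region[mask > 0] = patch[mask > 0]
```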

There you have it, a more theoretical/mathematical explanation of how to create sewing patterns from 3D models.

7 comments
  1. Karolina said:

Love it! I was thinking about this for years, but I only started programming recently and I’ve no idea about 3D models whatsoever. Could you maybe link me to some of the learning resources you used, or tell me more about how you came up with the code? I would love to follow in your footsteps and see where it leads me.

    • Jenny said:

      Hi Karolina, if you’re interested in learning to program, here’s a good source on different learning materials (for Python): http://harryrschwartz.com/2012/10/13/learning-programming-with-python.html
      As for 3D modeling, I’d recommend watching Youtube videos. There’s a lot out there for various tools. My preferred tool is Blender. Start making a couple models and you’ll get a feel for how they work.
      The file format I’m working with is OBJ. Here’s the OBJ description.
      https://en.wikipedia.org/wiki/Wavefront_.obj_file
      As for how I came up with the project, you need to know some linear algebra. I think Khan Academy has some linear algebra videos.
      Basically, I realized UVs for a 3D model could be turned into sewing patterns. What’s more, the texture image could also be turned into a sewing pattern. I couldn’t use the given texture image, so I changed it using linear algebra.
      Hope this helps! Let me know if you have any more questions.

  2. Yaz said:

    Thank you for this! I’ve been wanting to use a 3D program for years to create plushies, but for some reason it’s not a widely used method of creating a template. I’m not a programmer so I could never create something that gets around the odd UV distortions that wouldn’t transfer very well to a 3D plush. I am however a 3D character artist and I’m pretty excited to be able to make my characters into plushies with your tutorial!

    Thank you very much! 🙂

    I just wanted to know, if I don’t want to print out a texture onto the fabric, can I still use the Python script and have it not search for the texture map? Should I just put in a flat colour image instead?

    • Jenny said:

      Yup, just use a flat colour image. Let me know if you run into issues or have questions.

  3. Matt Carter said:

I can’t tell from the article: does your program modify the printable pattern in any way to account for the properties of fabric (I believe Plushie does this), or does it just transfer an image from one UV set/model to another?

    • Jenny said:

      It makes sure the UV seams are the same length, since that’s not guaranteed by UVs.

  4. Anthony said:

Hey, just wanted to say thank you for the article and the tips. I came across it this morning because I bought an interactive cat toy for my cat – it’s a fish with a motion sensor, battery, and floppy motor. I wanted to remake the fish as a cat fish out of leather, but patterning out an organic shape for someone who works almost exclusively with flat objects is challenging. It’s been several years since I’ve fired up 3ds Max, but I’m so excited to go down this rabbit hole again. If at the end I get a badass cat fish plush, hell yes. Wonderful article, and thank you for sharing your knowledge.
