Clarification on Marching Cubes Algorithm

Regarding Marching Cubes, I have some doubts about its algorithm and implementation. I have gone through Paul Bourke's excellent article on Marching Cubes as well as the source code available on the site, yet I still have trouble understanding some points and working out how to implement the algorithm in my own way. My questions are as follows:

  • Gridcell size - I have read that the gridcell size affects the quality of the produced 3D model. For example, if I have a stack of X-ray images of size 200*200*200, a slab of gridcells would be constructed from two adjacent image slices, so the total number of gridcells in a slab would be (200-1)*(200-1), with each gridcell corner corresponding to a pixel value/density of the image. Is this correct? Also, how do we implement a different gridcell size? (See the sketch after this list for what I have in mind.)

  • Voxel size - I have read a few references on Marching Cubes and I can't seem to find how the voxel size is handled in the algorithm. Please correct me if I am wrong: in my case the gap between adjacent image slices is 1 mil; how do I account for that in the Marching Cubes algorithm, or is it a dead end? Is it handled through the gridcell size? (Assumption: one pixel is 19 microns in x and y, while the gap in z is 25.4 microns, i.e. 1 mil.)

  • Coordinates of the gridcell corners (vertex coordinates of the cube) - I am trying to assign the coordinates of the gridcell corners, indexed by (i, j, k), by nested looping over the image set dimensions (200*200*200). Is this correct? Is there a faster way to do it?

  • Note: I have seen an implementation of MC in VTK, but it is quite hard for me to digest because it depends on several other VTK classes.
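
To make the first and third points concrete, here is a minimal sketch in C++ of how I currently picture building one gridcell from two adjacent slices. The accessor pixel() and the spacing constants dx, dy, dz (19 micron in x/y, 25.4 micron in z) are my own hypothetical names, and the corner ordering would still have to be checked against the edge/triangle tables in Paul Bourke's code:

```cpp
// Hypothetical accessor: density of pixel (x, y) on slice z of the 200*200*200 stack.
float pixel(int x, int y, int z);

struct Vec3 { float x, y, z; };

struct GridCell {
    Vec3  p[8];     // corner positions in physical units (mm)
    float val[8];   // densities sampled at the corners
};

// Assumed physical spacing: 19 micron in x/y, 25.4 micron (1 mil) between slices.
const float dx = 0.019f, dy = 0.019f, dz = 0.0254f;   // millimeters

// Build the cell whose lower corner is pixel (i, j) on slice k; a 200*200*200
// stack therefore yields 199*199 cells per slab and 199 slabs in total.
GridCell makeCell(int i, int j, int k)
{
    // Corner order: 0..3 on slice k, 4..7 on slice k+1 (check against the
    // edge/triangle tables of the MC implementation you use).
    static const int off[8][3] = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
                                   {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1} };
    GridCell c;
    for (int n = 0; n < 8; ++n) {
        int x = i + off[n][0], y = j + off[n][1], z = k + off[n][2];
        c.p[n]   = { x * dx, y * dy, z * dz };  // spacing enters only through the coordinates
        c.val[n] = pixel(x, y, z);
    }
    return c;
}
```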


    Lots of questions. I am going to try to give some pointers. First of all, 200^3 is a pretty small dataset for CT! What about 1024^3? :)

    Marching cubes is built for regular grids, so whether the data is defined at cube vertices or at cube centers really does not matter: just shift by half the cube size! If you have irregular data, use something else or resample to a regular grid first.
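
    As a small illustration of that half-cell shift (the struct and function names are only placeholders, not part of any particular implementation):

```cpp
struct Point { float x, y, z; };

// Where to place sample (i, j, k) in space on a regular grid with spacing dx, dy, dz.
// If the density is stored at voxel corners, the sample sits on the grid point itself;
// if it is stored at voxel centers, the same machinery works after a half-cell shift.
Point samplePosition(int i, int j, int k,
                     float dx, float dy, float dz, bool storedAtCenters)
{
    float s = storedAtCenters ? 0.5f : 0.0f;   // the "shift by half the cube size"
    return { (i + s) * dx, (j + s) * dy, (k + s) * dz };
}
```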

    You also seem to be missing the "marching" part: the idea is to find one cube that the surface passes through and flood-fill outward from there. Cubes that are entirely outside or entirely inside stop the search. That way, most cubes in your huge regular grid never need to be looked at.
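
    A hedged sketch of that search, assuming two helpers that are not spelled out here: cellMinMax(i, j, k), returning the smallest and largest corner density of a cell, and polygonise(i, j, k), emitting the triangles for a cell as in Paul Bourke's code:

```cpp
#include <array>
#include <queue>
#include <utility>
#include <vector>

// Assumed helpers (placeholders, not real library calls).
std::pair<float, float> cellMinMax(int i, int j, int k);
void polygonise(int i, int j, int k);

const int NX = 199, NY = 199, NZ = 199;   // cells per axis for a 200^3 voxel stack

// Flood-fill outward from one seed cell that the isosurface passes through.
// Cells entirely below or entirely above the isovalue end the search there,
// so most cells of the grid are never visited.
void marchFromSeed(int si, int sj, int sk, float isovalue)
{
    std::vector<char> visited(NX * NY * NZ, 0);
    std::queue<std::array<int, 3>> todo;
    todo.push({si, sj, sk});
    visited[(sk * NY + sj) * NX + si] = 1;

    while (!todo.empty()) {
        std::array<int, 3> cell = todo.front();
        todo.pop();
        int i = cell[0], j = cell[1], k = cell[2];

        std::pair<float, float> mm = cellMinMax(i, j, k);
        if (mm.first > isovalue || mm.second < isovalue)
            continue;              // surface does not cross this cell: the march stops here

        polygonise(i, j, k);       // surface crosses: triangulate and keep marching

        static const int d[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0},
                                     {0,-1,0}, {0,0,1}, {0,0,-1} };
        for (int n = 0; n < 6; ++n) {
            int x = i + d[n][0], y = j + d[n][1], z = k + d[n][2];
            if (x < 0 || y < 0 || z < 0 || x >= NX || y >= NY || z >= NZ)
                continue;
            char& seen = visited[(z * NY + y) * NX + x];
            if (!seen) { seen = 1; todo.push({x, y, z}); }
        }
    }
}
```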

    Scaling to real units should be a last step. Think of the input volume as normalized to 1x1x1, then scale the output vertices to physical units. The data you have is the data you have: any resampling should be done beforehand, during reconstruction or filtering; it has no place in the geometry stage.
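
    A minimal sketch of that last step, assuming the mesh was extracted in index space (one grid step = 1.0) and using the spacing figures from the question; the struct and function names are placeholders:

```cpp
#include <vector>

struct Vertex { float x, y, z; };

// Vertices produced by marching cubes in index space. Scaling each axis by its
// physical spacing is the only place the real-world voxel size enters; the
// triangle connectivity is untouched.
void toPhysicalUnits(std::vector<Vertex>& verts,
                     float dx = 0.019f,   // mm per pixel in x (19 micron, assumed)
                     float dy = 0.019f,   // mm per pixel in y
                     float dz = 0.0254f)  // mm per slice gap in z (1 mil)
{
    for (Vertex& v : verts) {
        v.x *= dx;
        v.y *= dy;
        v.z *= dz;
    }
}
```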

    I am not sure I understand the last question, but one thing that is really important for further processing is to create a connected, indexed mesh. One important trick there is to keep a kind of hash table of the previous slice/line/neighbor, so you can quickly look up already-created vertices and reuse their indices. The result should be a connected mesh with unique vertices, which you can then use in any kind of geometry processing.
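
    A hedged sketch of that bookkeeping: every marching-cubes vertex lies on a grid edge, so an order-independent key built from the edge's two grid-point indices identifies it, and repeated lookups return the index created the first time. The names here are placeholders, not from any particular library:

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct Vertex { float x, y, z; };

// Indexed mesh: unique vertices plus triangles that refer to them by index.
struct Mesh {
    std::vector<Vertex>   vertices;
    std::vector<uint32_t> triangles;   // three vertex indices per triangle
    std::unordered_map<uint64_t, uint32_t> edgeToIndex;   // edge key -> vertex index

    // Every MC vertex sits on a grid edge between two grid points a and b, so an
    // order-independent key of (a, b) identifies it uniquely.
    static uint64_t edgeKey(uint32_t a, uint32_t b) {
        if (a > b) std::swap(a, b);
        return (uint64_t(a) << 32) | b;
    }

    // Return the index of the vertex on edge (a, b), creating it only once.
    uint32_t vertexOnEdge(uint32_t a, uint32_t b, const Vertex& position) {
        uint64_t key = edgeKey(a, b);
        auto it = edgeToIndex.find(key);
        if (it != edgeToIndex.end())
            return it->second;                 // reuse the already-created vertex
        uint32_t index = uint32_t(vertices.size());
        vertices.push_back(position);
        edgeToIndex.emplace(key, index);
        return index;
    }
};
```

    Keeping only the entries belonging to the previous slice, as suggested above, bounds the size of the table; the single global map here is just the simplest form of the idea.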
