Bug-hunting, V-Bit tools, GPU-powered image ops, live cut simulation, auto-update core


Tool diagrams are all in place now, so users get a visual depiction of the cutting tool profile they've entered dimensions for. There's also a third cutting tool now: the v-bit. Internally this is just treated as a tapered cutter with a tip-diameter of zero, and instead of entering the flute-length (as is typical when specifying a tapered end-mill) users enter the diameter of the bit and the flute-length is calculated automagically.
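For anyone curious about the geometry behind that calculation, here's a minimal sketch (my own illustration with a made-up function name, not PixelCNC's actual code): a v-bit of diameter D and included tip angle A is a cone whose height from the zero-diameter tip up to the full diameter is (D/2)/tan(A/2).

```c
#include <math.h>

/* Flute length of a v-bit from its diameter and included tip angle.
   With a 90-degree bit, tan(45 deg) = 1, so the flute length is just
   half the diameter; narrower bits get proportionally longer flutes. */
double vbit_flute_length(double diameter, double tip_angle_deg)
{
    const double PI = 3.14159265358979323846;
    double half_angle = 0.5 * tip_angle_deg * (PI / 180.0);
    return (0.5 * diameter) / tan(half_angle);
}
```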

While working on all this I discovered a horrible bug that generates the wrong toolpaths for tapered cutters with a taper angle other than 45deg (i.e. a tip angle other than 90deg). A narrower bit will generate a toolpath that cuts too deep and a wider bit will generate a toolpath that cuts too shallow. I'm surprised nobody has discovered and reported this yet. Either everybody is using 45/90 bits, or nobody is using PixelCNC for any sort of v-carving (via the 'medial axis' operation). This has been corrected for v1.20a, though I'm tempted to backport the fix to the v1.18a code and release it quickly as v1.19a just because it's pretty significant. I've decided, however, to just hold onto it for v1.20a unless someone lets me know they need to be able to use non-90deg bits.
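To illustrate why only 45/90 bits escaped the bug (this is just my sketch of the underlying geometry, not the actual fix): for a cutter whose flank sits at angle T from the tool axis, the depth at which the flank reaches a radius r from the centerline is r/tan(T), and only at T = 45deg does that conveniently collapse to depth == radius. Code leaning on that special case ends up off in opposite directions on either side of 45deg.

```c
#include <math.h>

/* Depth at which a tapered cutter's flank reaches radius r from the
   tool's centerline, given the taper angle measured from the axis.
   At 45 degrees tan() is exactly 1 and depth == radius; any other
   taper angle scales the depth up or down from there. */
double taper_depth_for_radius(double radius, double taper_angle_deg)
{
    const double PI = 3.14159265358979323846;
    return radius / tan(taper_angle_deg * (PI / 180.0));
}
```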

I've also managed to move the most expensive image operations (a sort of Minkowski-sum convolution) onto the GPU. This currently only works on graphics hardware that supports OpenGL v3.0+, but hopefully that will change soon so it also works at least as far back as OpenGL v2.1, which would expand the number of devices that aren't forced to fall back on the slower CPU-based code. Ironically, it's the slowest and oldest machines that will be forced to run the slowest code.
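For the curious, the core of a Minkowski-sum style operation on a heightmap maps naturally onto a fragment shader. Here's a rough sketch of the idea (my own illustration against GLSL 1.30 / GL 3.0, not PixelCNC's actual shader, and all the uniform names are made up): a grayscale dilation that, for each output texel, takes the max of the heightmap plus the tool profile over the tool's footprint.

```c
/* Hypothetical GLSL 1.30 fragment shader: max-convolution (grayscale
   dilation) of a heightmap against a tool-profile kernel. */
static const char *dilate_frag_src =
    "#version 130\n"
    "uniform sampler2D u_heightmap;\n"
    "uniform sampler2D u_tool;    // tool profile, same texel scale\n"
    "uniform ivec2 u_tool_radius; // half-extents of the tool kernel\n"
    "uniform vec2  u_texel;       // 1.0 / heightmap resolution\n"
    "out float o_height;\n"
    "void main() {\n"
    "    vec2 uv = gl_FragCoord.xy * u_texel;\n"
    "    float best = -1e30;\n"
    "    for (int y = -u_tool_radius.y; y <= u_tool_radius.y; ++y)\n"
    "    for (int x = -u_tool_radius.x; x <= u_tool_radius.x; ++x) {\n"
    "        float h = texture(u_heightmap, uv + vec2(x, y) * u_texel).r;\n"
    "        float t = texelFetch(u_tool, ivec2(x, y) + u_tool_radius, 0).r;\n"
    "        best = max(best, h + t);\n"
    "    }\n"
    "    o_height = best;\n"
    "}\n";
```

Expressing the same kernel against GLSL 1.20 (GL 2.1) is mostly a matter of replacing texelFetch and the integer uniforms with older constructs, which is presumably where most of the backward-compatibility work lies.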

I lost quite a bit of time hunting down some bugs that proved to be among the worst I've ever dealt with. One of them was causing the new GPU code to generate randomly occurring black pixels in the output. This turned out to be a problem with the RAM in my desktop machine, which came to a head one night when the whole machine just stopped working entirely. I initially believed the problem was the GPU itself (it's a bit old and tired) but was relieved to determine it was just a worn-out 2GB stick of RAM causing all of these problems.

The second bug I hunted down involved PixelCNC not working properly on my dual-core Celeron netbook with the new GPU code added in. There didn't seem to be any reason for it to fail: generating toolpaths worked fine, but generating the simulation depthmap for an operation did not. Once an operation finished generating its toolpath and started generating its depthmap for the simulation system, the main program window would just stop rendering. The program wouldn't freeze; it would just stop updating everything being drawn in the window. I spent a day gutting what I had narrowed down to be the offending code and adding it back in piece-wise, so I could pinpoint the exact code that was causing it to glitch out on the netbook. Eventually I had added all the code back in and never found the problem - it just worked fine! So that was a day spent where I didn't learn what *not* to do in the future. I didn't learn anything. Admittedly, I did re-arrange a few things while adding the code back in piece-by-piece, just to clean it up and make it easier to follow, and that's all I can think of that could've possibly fixed it. So at least it works. I've still yet to test it on my three other machines. My fingers are crossed that I don't run into another silly experience like that one.

Anyway, the new GPU-based image operation code will also power the core of the new and improved simulation system, which will allow viewing an operation's cuts as a function of time, as well as scrubbing a timeline of the operation from beginning to end. The original multithreaded CPU-powered image operation code is just too slow to be usable for real-time simulation. With the new GPU image operation code in place, generating a mesh from a depthmap is now the slowest functionality in PixelCNC. Doing anything real-time with depthmap meshes is going to require some strategy or technique for minimizing the computation needed to evolve the mesh to reflect the real-time changes occurring to the simulation depthmap. The next big chunk of code involves expanding the existing meshing system so that the simulation system can incrementally 'update' an existing binary-triangle-tree mesh using an updated depthmap.

The naive approach would be to just re-build the entire mesh from scratch each time the simulation depthmap changes. The meshing system in its current form simply subdivides a mesh's triangles from scratch using a source image, but simulation playback will require that a mesh can both subdivide *and* merge triangles. That means modifying the existing simple linear triangle-node pool allocator so that merged triangles can be freed back to the pool and re-used by later allocations as needed. Nothing too complex, but making it fast is the tricky part. I'll have to brush up on my pool allocator optimization techniques.
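Roughly speaking, the plan is something along these lines - a minimal sketch (assumed names and layout, not the actual PixelCNC allocator) of a linear pool extended with an intrusive free-list so merged triangles can be recycled:

```c
#include <stddef.h>

typedef struct tri_node {
    struct tri_node *child[2];    /* binary-triangle-tree children      */
    struct tri_node *next_free;   /* intrusive free-list link           */
    /* ...vertex indices, split/merge error metrics, etc...             */
} tri_node;

typedef struct {
    tri_node *nodes;              /* fixed backing array                */
    size_t    capacity;
    size_t    high_water;         /* next never-used slot (bump cursor) */
    tri_node *free_list;          /* chain of merged/released nodes     */
} tri_pool;

tri_node *tri_alloc(tri_pool *p)
{
    if (p->free_list) {                   /* recycle a freed node first */
        tri_node *n = p->free_list;
        p->free_list = n->next_free;
        return n;
    }
    if (p->high_water < p->capacity)      /* otherwise bump-allocate    */
        return &p->nodes[p->high_water++];
    return NULL;                          /* pool exhausted             */
}

void tri_free(tri_pool *p, tri_node *n)
{
    n->next_free = p->free_list;          /* O(1) push onto free-list   */
    p->free_list = n;
}
```

Both paths stay O(1), so I suspect the interesting optimization work is more about cache behavior - freed nodes scattered across the pool make the tree's memory access patterns progressively less coherent over time.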

An alternate strategy is to set up the simulation system to operate on a grid of smaller meshes generated from sub-regions of the simulation depthmap. Instead of modifying these smaller meshes via the subdivide-and-merge dynamic reallocation described above, it'd use the existing code that simply builds meshes from scratch, only subdividing triangles, and just rebuild the individual grid-square meshes affected by changes to the simulation depthmap. So rather than dealing with one big mesh and comparing all of its existing leaf-triangles against the updated depthmap, it'd localize changes to the individual sub-regions that are actually affected (see the sketch below). This seems kinda iffy, and potentially just as slow as the worst case of the strategy mentioned above, while that strategy seems to have the greatest potential for optimizations and tweaks to speed it up. The final solution will likely be some kind of combination or hybrid of the two.
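The dirty-marking step for that grid approach would be something like this (a sketch with an assumed tile size and made-up names): a changed region of the depthmap flags only the sub-meshes it overlaps for a rebuild.

```c
#define TILE_SIZE 64  /* depthmap texels per sub-mesh (assumed) */

/* Mark every grid tile overlapped by a changed depthmap rectangle
   [x0,x1] x [y0,y1]; only flagged tiles get their mesh rebuilt. */
void mark_dirty_tiles(unsigned char *dirty, int tiles_x, int tiles_y,
                      int x0, int y0, int x1, int y1)
{
    int tx, ty;
    for (ty = y0 / TILE_SIZE; ty <= y1 / TILE_SIZE && ty < tiles_y; ++ty)
        for (tx = x0 / TILE_SIZE; tx <= x1 / TILE_SIZE && tx < tiles_x; ++tx)
            dirty[ty * tiles_x + tx] = 1;
}
```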

The last large component I'm working on is the beginnings of the auto-update system. What will actually exist in v1.20a is just a notification system that tells users upon starting PixelCNC whether a new version is available for download. For the time being they'll then need to manually download and install the new version. It'd be easy to give the user a button that opens the PixelCNC page for them to download it from, so I'm sure I'll at least provide that convenience. This is really just meant to get the beginnings of the networking code and update server going as a foundation upon which actual self-updating functionality will be built. This simple notification at least saves users the trouble of manually checking the PixelCNC page or social media just to determine whether there's a new version to update to yet. Seeing as how most people will likely use PixelCNC only when they need it, and not every single day, this simple addition should prove rather handy. It will continue being built upon over time until it becomes a fully-functional auto-update system.
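The client side of that check can be as simple as fetching a version string from the update server and comparing it against the running build. Here's a sketch of the comparison (my own illustration, assuming a "1.20a"-style version format; none of this is the actual update protocol):

```c
#include <stdio.h>

/* Returns nonzero if version string b is newer than a, assuming a
   "major.minor" + revision-letter scheme (e.g. "1.18a" vs "1.20a"). */
int version_newer(const char *a, const char *b)
{
    int amaj = 0, amin = 0, bmaj = 0, bmin = 0;
    char asuf = 'a', bsuf = 'a';
    sscanf(a, "%d.%d%c", &amaj, &amin, &asuf);
    sscanf(b, "%d.%d%c", &bmaj, &bmin, &bsuf);
    if (bmaj != amaj) return bmaj > amaj;
    if (bmin != amin) return bmin > amin;
    return bsuf > asuf;   /* revision letters: 'a' < 'b' < 'c' ... */
}
```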

On top of all this, a bunch of other little things are being added, fixed, changed, etc., some of which I mentioned in the previous devblog post. I'll be sure to post a bit more here than I have been as of late, so stay tuned!
