Support talking to printers via json serial server #8
Do you have a protocol planned for your JSON serial server? Just curious, as I have a project with a fully functional backend and quite a bit of print functionality. We could easily implement one of our PrintFileProcessors to listen to your JSON and print accordingly.
Ahh, sorry I missed your comment. I was going to use Serial Port JSON Server to talk to some kind of existing board/firmware. Is there an existing format that you support that I can reuse? I could dump a set of images in a zip along with some JSON information; I don't know what existing formats are available. The slicer is all camera tricks, fragment/vertex shaders, and image processing. I am in the initial stages of defining the new advanced slicer, which, using these techniques, will support external supports, internal supports, shelling (possibly hollow shells with their own infill pattern), rafts, etc. Basically think of it as "scanline/slice voxelization": instead of building an octree slice by slice, the output is a binary image immediately suitable for printing. If your 3D card can load it, it can be printed.
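The "zip of images plus some JSON" handoff mentioned above could be described with a small manifest. Here is a hypothetical sketch in TypeScript; the field names (`layerHeightMm`, `pixelsPerMm`, `images`) are my own assumptions, not an existing format from either project:

```typescript
// Hypothetical manifest describing a zip of pre-sliced images.
// None of these field names come from an existing standard.
interface SliceManifest {
  layerHeightMm: number; // vertical height of each slice
  pixelsPerMm: number;   // X/Y resolution of the rendered images
  images: string[];      // file names inside the zip, bottom-up order
}

// Minimal sanity check before accepting a manifest for printing.
function validateManifest(m: SliceManifest): string[] {
  const errors: string[] = [];
  if (m.layerHeightMm <= 0) errors.push("layerHeightMm must be positive");
  if (m.pixelsPerMm <= 0) errors.push("pixelsPerMm must be positive");
  if (m.images.length === 0) errors.push("manifest contains no images");
  return errors;
}
```

A printer host could validate the manifest before unpacking the zip, and reject it with specific errors instead of failing mid-print.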
I guess theoretically it should be relatively easy to port the slicer to a Java 3D library of some sort.
I think there are a few ways we could go, and we have the option to design our own interaction protocol. CWH (soon to be renamed Photonic3D) has a fairly robust set of motion templates (GCode templates and exposure/lift calculators), as well as a physical resolution calibration tool that can determine the X/Y pixel dimensions & pixel density (pixels/mm) based on an interactive UI. We also have support today for taking a zip of images and then printing it, which utilizes the motion templates but not the resolution calibration. Off the top of my head there are a few options (which are not necessarily mutually exclusive, but might be ordered in some kind of evolutionary roadmap):
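The resolution-calibration idea described above boils down to simple arithmetic once the interactive UI has measured how many projector pixels span a known physical distance. A minimal sketch (the function names are mine for illustration, not Photonic3D's actual API):

```typescript
// Given a calibration feature of known physical size, measured to span
// a known number of projector pixels, derive the pixel density and the
// physical size of one pixel. Names here are illustrative only.
function pixelsPerMm(spanPixels: number, spanMm: number): number {
  return spanPixels / spanMm;
}

function pixelSizeMm(spanPixels: number, spanMm: number): number {
  return spanMm / spanPixels;
}
```

For example, a 20 mm calibration square measured at 200 pixels wide gives a density of 10 pixels/mm, i.e. a 0.1 mm pixel pitch.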
I think for graphics card (e.g. WebGL) based acceleration, the browser is a better place to do this than with Java3D. The libraries necessary to get Java3D working in a hardware accelerated fashion on the Raspberry Pi are not very mature (I think a compatible OpenGL implementation was only checked into Jessie in February, and contains some rendering bugs) and don't have the kind of developer manpower and user base that WebGL has.
So it seems offloading slicing to the managing computer's browser might be a … Well, I could definitely add a "Hosted" mode which could pull printer …
We are in the middle of "swaggerizing" our RESTful API, and you can follow our progress on that issue (area515/Photonic3D#188) to find out when that will be ready. We currently use WebSockets as an event notification system. Even our WebSocket API has a very RESTful style, where it's designed to connect directly to specific printers and print jobs. Once that socket is connected, you get events for the object you specified in the URL. Here is a quick rundown:
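The connect-then-receive-events pattern described above can be sketched in TypeScript. The URL shape and event payload format below are my assumptions for illustration, not the documented Photonic3D wire format:

```typescript
// Build a per-object event URL in the style described above.
// The path segments ("printers", "printjobs") are assumptions.
function buildEventUrl(
  base: string,
  kind: "printers" | "printjobs",
  id: string
): string {
  return `${base.replace(/\/$/, "")}/ws/${kind}/${encodeURIComponent(id)}`;
}

// Dispatch a raw event payload to a handler keyed by event name.
// Returns false when no handler is registered for the event.
function dispatchEvent(
  raw: string,
  handlers: Record<string, (data: unknown) => void>
): boolean {
  const event = JSON.parse(raw) as { name: string; data?: unknown };
  const handler = handlers[event.name];
  if (!handler) return false;
  handler(event.data);
  return true;
}

// In a browser you would then connect with the standard WebSocket API:
// const socket = new WebSocket(
//   buildEventUrl("http://printer.local", "printers", "myPrinter"));
// socket.onmessage = (msg) =>
//   dispatchEvent(msg.data, { printerChanged: console.log });
```

The dispatcher keeps the socket handling generic: consumers only register handlers for the event names they care about.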
It's very simple to consume. We are looking at integrating your slicer for two different functions:
As a part of the second function, I'd like to work out a vector-based protocol (SVG?) so we don't have to ship these huge graphics back and forth to the server. I'd also like to work out a standard interface that both of us can agree on.
The whole point of the slicer is to avoid vector slicing. It's rendered … SVG would slow it … As for the images, they could be shipped as PNGs or GIFs. They are black … I made a 2048x2048 black image and drew a bunch of white shapes on it and … It's also perfectly possible to run the slicer "headless" in the DOM by … Tell you what, I'll open a new ticket for tracking this idea. I am a Java dev normally, and have Gradle experience as well. I can fork or create a new repo to build a component that does what you need.
See #11.
I just want to say that when WebGL adds support for microtessellation, especially the kind directed by texture/vector maps, it will be trivial to support it in the slicer. Users could then use a texture to modify a model, or tell the OpenGL context to "smooth" it (by subdividing further via microtessellation), and the slicer would basically support it automagically.
The raster/vector issue isn't a deal breaker. Just keep in mind that for the CPU savings we gain in WebGL, we lose some in I/O transfers, zip compression, and zip expansion. I'm still pretty sure there is plenty of performance gain from client-side slicing.

You mentioned you were a Java coder, so you might be interested in taking a look at our parallel slicer for STL. It's fully functional and comes with a Swing GUI to help debug slicing issues. The general benefit is that you don't have to perform a "preslice" phase and zip the contents before printing. Instead, each slice is computed in parallel with printing the previous slice (the gcode commands and motor moves). The positive part of this approach is that the slicer really doesn't need to be that efficient, because it just needs to be faster than the motors and exposure time of the previous slice. That's almost always attainable. The other benefit is that since we are in control of low-level slicing, we keep track of non-manifold geometry and can report that information back to the REST client as errors. The problem with this approach is that the slicer has to be perfect in order to trust it, and ours needs some work. The other problem is that you don't have all of the cool features that come with WebGL, as you've mentioned.

Let me know if we can be of any help.
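The claim that the slicer "just needs to be faster than the motors and exposure time of the previous slice" can be checked with a small timing model. This is a sketch under my own simplifying assumptions (a two-stage pipeline, no startup overhead), not Photonic3D code:

```typescript
// Model a two-stage pipeline: layer n+1 is sliced while layer n prints.
// Total time is the first slice, plus, for each layer, whichever of
// (print time, next slice time) dominates. The pipeline only stalls
// when a slice takes longer than the print it overlaps with.
function pipelinedTimeMs(sliceMs: number[], printMs: number[]): number {
  let total = sliceMs[0]; // first layer must be sliced before printing starts
  for (let i = 0; i < printMs.length; i++) {
    const nextSlice = i + 1 < sliceMs.length ? sliceMs[i + 1] : 0;
    total += Math.max(printMs[i], nextSlice); // stall only if slicing is slower
  }
  return total;
}
```

If every slice time is below the corresponding exposure/motor time, the total is just the first slice plus the sum of the print times, which is why the in-line slicer "really doesn't need to be that efficient."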