Database loading #3
I think it might be useful to parse the databases into JavaScript objects (as you do now), then save them to disk in JSON format. Then they could be loaded much more quickly to reconstitute their object form. Of course, it would still be useful to separate the individual chips, so you only load the one you need for a given design.
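A minimal sketch of that flow, assuming a hypothetical offline pre-parse step (which would replace or supplement `gen_chipdb_js.pl`); the field names on the database object are invented for illustration:

```javascript
// Offline step (e.g. a small Node script): serialize whatever object
// the existing parser produces, so the browser never re-parses the text format.
function chipdbToJson(parsedDb) {
  return JSON.stringify(parsedDb);
}

// Browser step: reconstitute the object with the built-in JSON decoder,
// which is typically much faster than parsing the textual chipdb.
function loadChipdb(jsonText) {
  return JSON.parse(jsonText);
}

// Round trip with a toy database object:
const db = { device: "hx8k", tiles: [{ x: 0, y: 0, type: "logic" }] };
const restored = loadChipdb(chipdbToJson(db));
console.log(restored.device, restored.tiles[0].type);
```

In the browser, the JSON for the selected chip would be fetched on demand and handed straight to `loadChipdb`, so only the needed database is downloaded and decoded.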
Yes, that may have been the reason. I don't recall exactly, but I do remember I had problems with the file:// origins and browser policies, and had to do work-arounds.
Sorry, I don't quite understand: what exactly is broken? But I'm not really worried about the stand-alone mode. I'm not even sure anybody ever used it.
I think it is fine to drop the support for file: origins if it simplifies things. Using a local web server works fine, and if someone really needs the stand-alone mode, maybe we can find another way to implement it that doesn't complicate the main, HTTP-based mode.
That sounds like a good idea. I haven't really done much client-side web development, so I haven't worked with Web Workers before; I assume it's a way to do the parsing as a separate step, which sounds good since parsing is a performance bottleneck. Alternatively, the suggestion by @dalnefre to pre-parse the database and load it as JSON may make things sufficiently fast by itself, or the two ideas could be combined, loading the JSON in a Web Worker.
Agreed, it would be interesting to see how much of a speedup this gives. I'm not actually sure why I did the parsing in the JavaScript, given how slow it is and how much it delays startup.
I think combining the ideas would be ideal. Drop support for "file:" URLs and load a database (from the web-server) directly into a JavaScript object once the chip type has been determined. In addition, I think we should benchmark the browser's built-in JSON decoder against the OED binary format. The databases should be noticeably smaller in OED, so they should load faster over the network, but it may be slower to decode. It might also be worthwhile to compress the data for transport, but maybe the server/browser will do that for us. If OED is considered appropriate for this application, there is both a robust/complete implementation and a streamlined/fast implementation available to choose from.
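One way to run that benchmark, sketched with a toy payload and a generic timing harness (a real OED decoder is not shown; all names here are illustrative):

```javascript
// Rough timing harness: run the same decoder repeatedly over one payload.
// In the browser, performance.now() would give better resolution than Date.now().
function benchmarkDecode(label, decode, payload, iterations = 50) {
  const start = Date.now();
  let result;
  for (let i = 0; i < iterations; i++) result = decode(payload);
  return { label, ms: Date.now() - start, result };
}

// Toy JSON payload very roughly shaped like a chip database.
const jsonText = JSON.stringify({
  tiles: Array.from({ length: 1000 }, (_, i) => ({ x: i % 32, y: (i / 32) | 0 })),
});

const jsonRun = benchmarkDecode("JSON.parse", JSON.parse, jsonText);
console.log(jsonRun.label, jsonRun.ms + "ms, tiles:", jsonRun.result.tiles.length);

// An OED run would use the same harness with its own decode function
// and binary payload, e.g.: benchmarkDecode("OED", oedDecode, oedBytes);
```

Measuring both transfer size (bytes over the wire, with and without gzip) and decode time on the real databases would settle the JSON-versus-OED question.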
Hey Kristian, thanks for the useful project.
The Perl script (gen_chipdb_js.pl) bundles the databases into a JavaScript file, which, unlike text files, may be loaded from `file:` origins. It appears to me that this is primarily used to support the standalone mode.

In non-standalone mode, the examples dropdown is broken on the `file:` origin because the XMLHttpRequest for .asc files fails CORS (this may have only become an issue in the last few years, as browsers tightened their security policies). In standalone mode, the .asc file is bundled directly into the HTML and so (I am assuming) does still work.

I think there might be a way to load the databases lazily and keep the standalone mode. If it is possible to find a web server on the internet that serves the databases with permissive CORS headers (and Cache-Control headers!), they can be XMLHttpRequested directly as text and the Perl script can be removed. Alternatively, dropping support for the `file:` origin would simplify everything tremendously, but would necessitate a local web server. I don't know how you'd feel about this.

Also, if parsing were done in a Web Worker, there would be no need to "step" it to keep the UI responsive. The parsing of the .asc file and the databases could be done in parallel. But that's a different issue.
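As a rough sketch of the Web Worker idea (the worker wiring is browser-only and shown in comments; `parseChipdb` below is a toy stand-in for the project's real database parser, matching one line form of the textual format):

```javascript
// Toy parser standing in for the real chipdb parser. Because the worker runs
// it off the main thread, the loop never needs to be "stepped" or yielded.
function parseChipdb(text) {
  const db = { tiles: [] };
  for (const line of text.split("\n")) {
    const m = /^\.logic_tile (\d+) (\d+)$/.exec(line);
    if (m) db.tiles.push({ x: Number(m[1]), y: Number(m[2]) });
  }
  return db;
}

// worker.js (browser only):
//   self.onmessage = (e) => self.postMessage(parseChipdb(e.data));
//
// main thread (browser only):
//   const w = new Worker("worker.js");
//   w.onmessage = (e) => { chipdb = e.data; redraw(); };
//   fetch("chipdb-8k.txt").then((r) => r.text()).then((t) => w.postMessage(t));

console.log(parseChipdb(".logic_tile 3 7\n").tiles.length); // 1
```

The .asc file could be posted to a second worker in the same way, so both parses run in parallel while the UI stays responsive.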