Virtual File System #694
blunted2night started this conversation in Show and tell
Replies: 1 comment · 1 reply
-
Sounds like we had very similar ideas 😄 A couple of hours before you posted this I submitted PR #693, which among many other changes adds an "AssetIo" abstraction.
-
I have created a branch that implements a pluggable virtual file system for asset loading. It is mostly functional, though hot-reloading, preloading entire directories, and WASM support are currently broken.
It works by adding a list of "asset storage providers" to the asset server, along with an API for adding new providers to that list. A default provider that implements the current behavior is automatically added to the asset server when it is created. When a request for an asset is made, the providers are queried in order until one returns the file contents. A provider may return the data as a borrowed reference or an owned vector, whichever is more convenient for it.
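Roughly, the shape is something like the following sketch; the names here (`AssetStorage`, `AssetStorageProvider`, `add_storage_provider`, `load_bytes`) are illustrative, not necessarily the identifiers used in the branch:

```rust
use std::path::Path;

/// Asset bytes as returned by a provider: either borrowed (e.g. from an
/// embedded or memory-mapped archive) or owned.
pub enum AssetStorage<'a> {
    Borrowed(&'a [u8]),
    Owned(Vec<u8>),
}

/// One entry in the asset server's provider list.
pub trait AssetStorageProvider: Send + Sync {
    /// Return the file contents if this provider can serve `path`.
    fn load(&self, path: &Path) -> Option<AssetStorage<'_>>;
}

pub struct AssetServer {
    providers: Vec<Box<dyn AssetStorageProvider>>,
}

impl AssetServer {
    /// Register an additional provider; a default filesystem provider is
    /// added automatically when the server is created.
    pub fn add_storage_provider(&mut self, provider: Box<dyn AssetStorageProvider>) {
        self.providers.push(provider);
    }

    /// Query the providers in order until one returns the file contents.
    pub fn load_bytes(&self, path: &Path) -> Option<AssetStorage<'_>> {
        self.providers.iter().find_map(|provider| provider.load(path))
    }
}
```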
Once the asset data is available, it is passed to the existing asset loader with some minor changes: the loader now takes the storage provider's "asset storage" enum instead of a vector of bytes. This allows zero-copy asset loading from memory-mapped or embedded asset archives.
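Continuing the illustrative sketch above, the loader-side change might look roughly like this; the method name and error type are placeholders rather than the branch's real API:

```rust
use std::path::Path;

impl<'a> AssetStorage<'a> {
    /// View the bytes without copying, whichever variant is held.
    pub fn as_slice(&self) -> &[u8] {
        match self {
            AssetStorage::Borrowed(bytes) => bytes,
            AssetStorage::Owned(bytes) => bytes.as_slice(),
        }
    }
}

/// Placeholder error type for the sketch.
pub struct LoadError;

/// The loader receives the storage enum instead of an owned `Vec<u8>`, so
/// borrowed data from an embedded or memory-mapped archive is never copied.
pub trait AssetLoader<T>: Send + Sync + 'static {
    fn from_storage(&self, asset_path: &Path, storage: AssetStorage<'_>) -> Result<T, LoadError>;
    fn extensions(&self) -> &[&str];
}
```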
For Example
This is a storage provider that can serve files that are embedded directly into the executable:
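The original snippet isn't reproduced here; as a rough sketch of such a provider, reusing the illustrative trait from above:

```rust
use std::path::Path;

/// One file compiled into the binary: its asset path and its bytes.
pub struct EmbeddedEntry {
    pub path: &'static str,
    pub bytes: &'static [u8],
}

/// Serves assets from a static table of files embedded in the executable.
pub struct EmbeddedAssetProvider {
    pub entries: &'static [EmbeddedEntry],
}

impl AssetStorageProvider for EmbeddedAssetProvider {
    fn load(&self, path: &Path) -> Option<AssetStorage<'_>> {
        self.entries
            .iter()
            .find(|entry| Path::new(entry.path) == path)
            // Borrowed, not copied: the bytes live in the executable image itself.
            .map(|entry| AssetStorage::Borrowed(entry.bytes))
    }
}
```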
A data table for it can be specified like this:
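Something along these lines, with hypothetical asset paths:

```rust
/// Table of embedded assets; `include_bytes!` paths are resolved relative to
/// this source file at compile time.
static EMBEDDED_ASSETS: &[EmbeddedEntry] = &[
    EmbeddedEntry {
        path: "branding/icon.png",
        bytes: include_bytes!("../assets/branding/icon.png"),
    },
    EmbeddedEntry {
        path: "sounds/hit.ogg",
        bytes: include_bytes!("../assets/sounds/hit.ogg"),
    },
];
```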
And configured into the app:
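A sketch of the wiring, assuming the branch exposes an `add_storage_provider`-style method on the `AssetServer` resource (the exact builder calls depend on the Bevy version):

```rust
use bevy::prelude::*;

fn main() {
    App::build()
        .add_default_plugins()
        // Register the embedded provider once the default plugins have
        // created the AssetServer resource.
        .add_startup_system(add_embedded_assets.system())
        .run();
}

fn add_embedded_assets(mut asset_server: ResMut<AssetServer>) {
    asset_server.add_storage_provider(Box::new(EmbeddedAssetProvider {
        entries: EMBEDDED_ASSETS,
    }));
}
```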
Then accessed:
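Loading then goes through the normal `AssetServer` API; the call site is unchanged, and the exact `load` signature depends on the Bevy version:

```rust
fn setup(asset_server: Res<AssetServer>) {
    // The path is resolved through the provider list, so these bytes come
    // from the embedded table if it has a match, and from the default
    // filesystem provider otherwise.
    let icon = asset_server.load("branding/icon.png");
}
```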
There is certainly room for improvement:
- The names used should probably be improved; for example, `AssetStorage` might be better as `AssetContent` or even `AssetBytes`.
- Some care was required to allow the file-open function to return a reference, but it comes down to this: as long as a reference to the "provider list" `Arc` or the "resolver" `Arc` is held, a reference to the asset data can also be held.
- An asset loader should be given an opportunity to clone the providers `Arc`, so that it can extend the lifetime of the data reference if it wants to.
- It's not clear to me what the difference between sync and async loading in WASM is. I see that sync calls go through the standard library, whereas the async path seems to use the browser's JavaScript API. It shouldn't be a problem to preserve the current behavior, though.
- Hot-reloading (at least the monitoring portion) would need to move into the storage provider, but that code currently works in a global, static way that I'm not sure how to transform.
- The fix for folder pre-loading, I guess, would be for each storage provider to return a list of filenames, or to take a closure that is invoked on each file within a folder that the provider can see; the asset server would then filter out duplicates present across providers (see the sketch after this list).
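For illustration, a hypothetical enumeration hook plus de-duplication might look like this (again extending the earlier sketch, not the branch's actual code):

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

pub trait AssetStorageProvider: Send + Sync {
    fn load(&self, path: &Path) -> Option<AssetStorage<'_>>;

    /// Hypothetical addition: call `visit` once for each file this provider
    /// can see under `dir`.
    fn for_each_in_dir(&self, dir: &Path, visit: &mut dyn FnMut(&Path));
}

impl AssetServer {
    /// Ask each provider in turn, skipping paths already reported by an
    /// earlier (higher-priority) provider.
    pub fn list_dir(&self, dir: &Path) -> Vec<PathBuf> {
        let mut seen = HashSet::new();
        let mut files = Vec::new();
        for provider in &self.providers {
            provider.for_each_in_dir(dir, &mut |path: &Path| {
                if seen.insert(path.to_path_buf()) {
                    files.push(path.to_path_buf());
                }
            });
        }
        files
    }
}
```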
If anyone is interested, it is accessible from my fork under the `vfs` branch. Any feedback is appreciated. Thank you for your time.