automatically created raw and scaled output
Scan-to-printed copy done in 40 minutes with 100 photos
(copy is working ;)
I have been working on this feature for a long time and it is getting close to completion. I am happy to introduce the OpenScan Cloud, where you will be able to automatically process the image sets from the OpenScan devices. Of course you will always be able to do the processing locally, but I hope that this step will make the OpenScan device even more user-friendly.
2020-02-15 There have been several very successful tests and I am really happy with the overall functionality. For further general-purpose testing, I have added a (temporarily valid) public Cloud ID which can be used to test the service. Detailed setup instructions can be found on GitHub.
Please note that for future use you will need a private Cloud ID, which you can get by email (email@example.com). Please let me know which device you are using and/or what your main use case will be. Thank you!
2020-02-05 Still a very early phase of testing. People who are able to access their Raspberry Pi via the command line are welcome to join the testing. Please register below to get your Cloud ID and instructions on how to set up the user interface. (It might take several days for me to answer :)
one-time setup by registering with name + email address
one click to upload the files and start the processing
the result will be made available via a Dropbox link and can be downloaded via the OpenScan user interface (see below)
automated scaling (!) with an accuracy of up to 0.5% of the object's size, e.g. 0.5 mm on a 100 mm object (only works with the OpenScan Mini; scale accuracy depends on the surface quality of the object)
some automated feedback on the reconstruction & quality of the result
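The "one click" upload-and-process step above ultimately comes down to a single request from the user interface to the cloud. Since the actual API is not documented here, the host, route, and parameter names below are pure assumptions; this is only a sketch of how such a trigger could look from the Pi:

```python
# Hypothetical sketch of the one-click processing trigger.
# BASE_URL, the /status route, and the query parameters are assumptions,
# NOT the real OpenScan Cloud API.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://openscan-cloud.example/api"  # placeholder host

def build_status_url(cloud_id: str, project: str) -> str:
    """Compose the GET request that the public server would receive."""
    query = urllib.parse.urlencode({"id": cloud_id, "project": project})
    return f"{BASE_URL}/status?{query}"

def start_processing(cloud_id: str, project: str) -> dict:
    """Fire the request and parse the JSON reply (needs a live server)."""
    with urllib.request.urlopen(build_status_url(cloud_id, project)) as resp:
        return json.load(resp)
```

The user interface would call something like `start_processing(my_cloud_id, "scan_001")` once the photo set is uploaded, and later receive the Dropbox download link in the reply.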
There will be a credit-based usage limit for each user. I will try to keep this limit as high as possible, but I need to see how much load the servers can handle. In any case, the system is designed to be scalable, and I intend to cover the cost with some kind of voluntary monthly subscription model.
In order to improve the system's capabilities, I will use the uploaded image sets to further test and refine the reconstruction algorithm. Please bear in mind not to upload any sensitive material.
max. 200 images and/or 300 MB per set (limits will be increased in the future)
no texture (yet)
result with polygon count reduced to 100k (which should be more than enough for most objects)
file storage 14 days
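The image-set limits above can be checked on the client before anything is uploaded. The constants mirror the figures stated here (200 images, 300 MB); the helper function itself is a hypothetical illustration, not part of the actual OpenScan interface:

```python
# Hypothetical pre-upload check mirroring the stated limits
# (200 images, 300 MB per set). Not part of the real OpenScan code.
MAX_IMAGES = 200
MAX_SET_BYTES = 300 * 1024 * 1024  # 300 MB

def check_image_set(sizes_bytes):
    """Return (ok, reason) for a list of per-image file sizes in bytes."""
    if len(sizes_bytes) > MAX_IMAGES:
        return False, f"too many images ({len(sizes_bytes)} > {MAX_IMAGES})"
    total = sum(sizes_bytes)
    if total > MAX_SET_BYTES:
        return False, f"set too large ({total} > {MAX_SET_BYTES} bytes)"
    return True, "ok"
```

A typical Mini scan of 100 photos at a few MB each passes easily; the check only rejects sets that the server would refuse anyway.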
This is still a very early version and I will move certain buttons and fields to different sections of the user interface.
Short Demo Video :)
Before continuing, please feel free to share any ideas/problems/thoughts with me. Thank you!
As I am not a great fan of the cloud-processing hype myself, I wanted to at least limit the number of involved parties (i.e. not migrating the processing to AWS or similar, but instead doing it locally). I want the setup to be as transparent and as secure as possible. Therefore I have designed the following setup:
Externally, I am using Dropbox file storage in order to keep transfer speeds high regardless of the user's location.
Internally, there are three devices: one manages the user requests, the second manages the files, and the third does all the work. (Welcome to the modern era, where two instances exist only for managing and a single instance does the heavy lifting...)
In order to maximize security, only Server 1 can be reached from the outside world, via HTTP GET requests. At the same time, this server does not have access to any of the photo sets. The other two servers are isolated and can only communicate with each other, not with the outside.
If necessary, the system is designed in a way that allows adding additional computing capabilities/workers next to Server 3.
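To illustrate how extra workers could sit next to Server 3 without changing the rest of the setup, here is a toy job-queue model. The class names and the least-loaded scheduling policy are my own illustration, not the real implementation:

```python
# Toy model of the internal setup: the file-managing server hands photo
# sets to any number of workers. Adding a worker is just one more entry
# in the dict; the queue logic does not change. Illustrative only.
from collections import deque

class JobQueue:
    """Stand-in for Server 2 queueing photo sets for processing."""
    def __init__(self):
        self.pending = deque()

    def submit(self, job):
        self.pending.append(job)

    def fetch(self):
        return self.pending.popleft() if self.pending else None

queue = JobQueue()
for name in ("set_a", "set_b", "set_c"):
    queue.submit(name)

# Two workers drain the same queue; a third would need no other change.
workers = {"worker1": [], "worker2": []}
while True:
    job = queue.fetch()
    if job is None:
        break
    # hand the job to the currently least-loaded worker
    target = min(workers, key=lambda w: len(workers[w]))
    workers[target].append(job)
```

Because workers only pull from the shared queue, scaling out means starting another worker process; neither the public-facing server nor the file server needs to know how many there are.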