Taiga Task 426 : Deployment of FLOSS document management system #45
base: master
Conversation
* run web container from local source
* publish ports to the host system
* keep database state in `data/` subfolder
* ignore `data/` subfolder from Dockerized database
@acorbi Please feel free to extend the description of this PR, if you see additional aspects which need to be present for merge.
@almereyda See the individual comments within the code, plus the following questions:
- Do I understand correctly that the Data API (data.transformap.co) serves as a proxy for the media-related API calls, by exposing a `/media/` endpoint in addition to the `/place/` endpoint?
- I do not see any references to the different versions of the media's metadata. I understand this is not yet contemplated here and might be conceived at a later point. For reference, see my draft definition at https://github.com/acorbi/transformap-editor/blob/impl-mocking-libs/app/lib/MMS_API.md. Do you agree with it in general? Would you structure the payloads differently?
UPDATE: See https://tree.taiga.io/project/transformap/task/418 for reference
```js
const put = (url, body) => breq.put(url, body).then(_.property('body'))
const delete_ = (url) => breq.delete(url).then(_.property('body'))

const endpoint = CONFIG.server.url() + '/media'
```
I do not see the UUID of the POI the media file(s) are assigned to anywhere.
- Shouldn't there be a reference to it, so the uploaded/retrieved media can be related to a certain POI?
As in: if I POST a new media file stored on IPFS (for example) by POSTing a JSON file similar to test/fixtures/media-new-ipfs.json to https://data.transformap.co/media/, how does the API know which POI I want to attach it to?
This can happen decoupled from the storage of media metadata. It would be up to the user client to update the respective `/place/UUID` document with an array of UUIDs of associated media.
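A minimal sketch of what this could look like on the client side, assuming the `place` document carries a `media` array of UUIDs (the property name and document shape are assumptions for illustration, not part of the data service specification):

```javascript
// Hypothetical client-side helper: add a media UUID to a place document
// without mutating the original, skipping duplicates.
const addMediaToPlace = (placeDoc, mediaUuid) => {
  const media = placeDoc.media || []
  if (media.includes(mediaUuid)) return placeDoc
  return Object.assign({}, placeDoc, { media: media.concat([mediaUuid]) })
}

// Usage sketch, reusing the get/put helpers from the test suite:
// get(serverUrl + '/place/' + placeUuid)
//   .then(doc => put(serverUrl + '/place/' + placeUuid,
//                    addMediaToPlace(doc, mediaUuid)))
```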
@almereyda OK, it is very decoupled indeed, but sounds good to me so far.
eventual consistency ;)
```js
    })
  })
})
describe('POST image', function () {
```
I do not see any metadata being POSTed along with the BLOB in this case.
- How is the metadata being uploaded? Shouldn't there be a way to POST it along with the BLOB?
There are multiple ways to post it along with the BLOB. Either the client pushes additional associated metadata, such as a name attribute, in a second PUT request, which would not need any further modification of the data service; or multiple POST requests are associated with each other by relying on multipart file streams, which requires more investigation and implementation.
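The "second PUT" variant could be sketched roughly as follows. The endpoint path, the returned document shape, and the merge helper are assumptions for illustration only:

```javascript
// Hypothetical two-step flow: POST the BLOB first, then PUT extra
// metadata (e.g. a name) onto the created media document.
const mergeMetadata = (mediaDoc, extra) => Object.assign({}, mediaDoc, extra)

// Usage sketch:
// post(serverUrl + '/media', blobPayload)            // step 1: create, returns { id, ... }
//   .then(doc => put(serverUrl + '/media/' + doc.id, // step 2: attach metadata
//                    mergeMetadata(doc, { name: 'my-photo.jpg' })))
```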
@almereyda OK, got it. Yeah, a second PUT request sounds easier to implement, although it could introduce the case where a media file does not have the required metadata (so far only `name`, correct?).
The Content-Disposition header of an HTTP POST upload contains a `filename` value, from which we could infer a sanitized name. Then metadata is detected to a broad extent.
Still, this involves correct handling of multipart.
Asking how to prepare this for compatibility with IIIF from here is left to succeeding iterations on the codebase. I may safely conclude KISS applies for now.
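Inferring a sanitized name from the header could look something like the sketch below; the regex-based parsing and the sanitization rules are assumptions, not the actual implementation:

```javascript
// Hypothetical helper: extract the filename value from a
// Content-Disposition header and sanitize it into a safe name.
const nameFromContentDisposition = (header) => {
  const match = /filename="?([^";]+)"?/.exec(header || '')
  if (!match) return null
  return match[1]
    .replace(/.*[\\/]/, '')      // drop any path components
    .replace(/[^\w.\- ]/g, '_')  // keep a conservative character set
}
```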
almereyda left a comment
- Yes, the `/media/` endpoint is intended for a generalised media management of `media` data types, decoupled from objects of the type `place`.
- Versioning is implemented in a generalised way for all `things` within T418 by @ponder2342, which is why the choice fell to separating `media` and `place` data from each other. A connection can be reintroduced by the user client via adding additional data to a `place` object, as formulated in my replies to your code comments.
Where would you document this kind of specification? Is anything in particular missing?
@thoka has always pointed out that there is no design paper about the data service.
Maybe @jum-s, @maxlath and I can find some time next week to scribble such a document?
This is slowly evolving, also see https://hack.allmende.io/ecobytes-20170621-mms-research?both#17092017
```js
  .then(res.json.bind(res))
  .catch(error_.Handler(res))
if (req.files) {
  lib.upload(req.body, req.files.mediaUpload) // code smell: hard coded input id, configurable, or per type?
```
Do I understand correctly that `req.body` would be the metadata of the media file, and `req.files.mediaUpload` the binary contents of the actual asset? Are they both POSTed in the same API call?
On https://hack.allmende.io/transformaps-20170926-development?view#api-calls-involved-on-uploading-a-media-file-and-associating-it-with-a-poi (and as currently implemented) I envision doing this in two calls.
If my statement is correct, our implementations are currently misaligned. Please confirm or elaborate so we can align them.
From the sequence diagrams I learn you expect to POST a BLOB to a separate /blob endpoint associated with any media/:id route. From this I like the idea of separating the creation of a media metadata container (thing) from its filling. Yet I suggest an upload can be triggered by the presence of a multipart stream in any PUT action, and thus couple this more tightly to a single medium's endpoint.
There may be a need for additional handling of the side effects, i.e. on other things such as `place`, where one might want to avoid allowing binary uploads.
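One way to gate this per type could be a small guard like the sketch below; the type table, helper name, and multipart check are assumptions, not the data service's actual API:

```javascript
// Hypothetical per-type guard: only types that allow binary attachments
// may trigger an upload, and only when the request is a multipart stream.
const uploadAllowed = { media: true, place: false }

const shouldHandleUpload = (type, contentType) =>
  uploadAllowed[type] === true &&
  /^multipart\/form-data/.test(contentType || '')
```

A PUT handler could call `shouldHandleUpload(type, req.headers['content-type'])` before routing the body to the upload logic.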
@almereyda I have been studying your proposal of coupling the upload of the binary contents of the asset together with its metadata in a POST (creation) or PUT (update) method on /media/, and came up with the following dilemma:
As described on https://hack.allmende.io/transformaps-20170926-development?view#api-calls-involved-on-updating-a-poi-by-uploading-a-new-asset, and according to the current implementation, updating a media file does not involve a PUT call to the media endpoint, but a POST call to media/{mediaId}/version, with the goal of storing a new version object in the DB rather than updating the media file's metadata.
I see two options at this point to achieve an implementation which would make sense to me:
- To leave the multipart upload of the asset decoupled from the creation (POST) or update of a certain media file, as is currently the case.
- To delegate the responsibility of creating new versions of a media file (journaling) to the data service. This means that the editor would not make any POST calls to `media/{mediaId}/versions`, as it currently does (see https://github.com/TransforMap/transformap-editor/pull/42/files#diff-e2815ec3044d7d3fc5c27a5a3ed1b358R115). The editor would just make a PUT call to `media/{mediaId}` and the data service would then store a new version of it.
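The second option could be sketched as follows on the data service side; the version object's shape (`number`, `timestamp`, `body`) is an assumption for illustration:

```javascript
// Hypothetical journaling helper: a PUT to media/{mediaId} appends a new
// version object to the document, instead of the editor POSTing to
// media/{mediaId}/versions itself.
const putMedia = (mediaDoc, newBody, now) => {
  const versions = mediaDoc.versions || []
  const nextVersion = {
    number: versions.length + 1,
    timestamp: now,
    body: newBody
  }
  return Object.assign({}, mediaDoc, { versions: versions.concat([nextVersion]) })
}
```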
I am currently reorganising my thoughts around adding to IPFS and updating the […] If we switched from the […] The public and private IPFS daemons may as well be linked to each other by an […]
Referring to #426, Deployment of FLOSS document management system.
This pull request combines commits which provide a media management engine to the data service.
`media` model and controllers