I’ve recently come across asciinema and I love it - it’s amazing - thanks for putting it out into the world!
I have been experimenting with a self-hosted asciinema-server running on GCP using Cloud Run as a way to share recordings of gemini-cli sessions with my colleagues. For the most part, it’s been working great.
As there is no native GCP support, I have mounted a Cloud Storage bucket directly on my container using GCS FUSE to land my recordings in a bucket. This also works pretty seamlessly.
However, one limitation I have run into is that Cloud Run only supports a maximum request size of 32 MB, so uploads of any recording larger than that are rejected. The gemini-cli sessions can generate a lot of terminal output and easily blow past that limit.
So… how does the community feel about having native support for cloud storage on the backend?
I think it should be fairly straightforward to implement while keeping the existing functionality in place: perhaps a new API endpoint on the server, plus a fallback mechanism in the client that kicks in when the file is too large for direct upload or when a config value is set. I’d also be interested in adding native GCP Cloud Storage support (in addition to the S3 support that’s there today) rather than just using FUSE. I’ll probably do this in my own fork regardless, but would love to see this land in the main codebase.
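To make the fallback idea concrete, here is a minimal sketch of the client-side decision. Everything here is hypothetical - the threshold, the flag, and the strategy names are not part of asciinema today, they just illustrate the proposed behavior:

```python
# Hypothetical sketch of the proposed client-side fallback logic.
# The 32 MB figure mirrors Cloud Run's request size cap mentioned above;
# an actual implementation would read this from client config.
DIRECT_UPLOAD_THRESHOLD = 32 * 1024 * 1024

def choose_upload_strategy(file_size: int, force_direct: bool = False) -> str:
    """Pick between the existing server-side upload and a (hypothetical)
    direct-to-bucket upload, based on file size or an explicit config flag."""
    if force_direct or file_size > DIRECT_UPLOAD_THRESHOLD:
        return "direct"   # negotiate a signed URL, then PUT to the bucket
    return "server"       # today's behavior: POST the file to the server
```

A real client would then branch on the returned strategy when starting the upload.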
I don’t use GCP (never have), and so far nobody has requested it, but I’m open to adding support for it, as long as you (or someone else) are willing to commit to maintaining it.
Thanks for the reply - I think implementing the actual backend support for GCS should be fairly straightforward, and I would be happy to support it. With FUSE, I can basically use the local fs implementation today, which works fine. But the limitation is that the file upload is still processed through the server.
Architecturally, I’m suggesting something slightly different here. Given that cloud storage is cheap and files can be arbitrarily large, it would be great to bypass the asciinema server and instead negotiate the upload directly with the S3 or GCP bucket on the client side - essentially a new upload protocol. If S3/Azure/GCP is configured as the backend storage, the server would generate a signed URL for the client, which would then initiate the PUT request to the bucket.
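The signed-URL handshake could be sketched roughly like this. To be clear, this is not the real S3 SigV4 or GCS V4 signing algorithm (those involve canonical requests, scoped credentials, etc.) - it’s a stripped-down HMAC illustration of the idea, with a made-up secret standing in for the bucket credentials:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Illustrative only: real deployments would use the cloud SDK's presigning
# APIs rather than a hand-rolled HMAC scheme.
SECRET = b"server-side-secret"

def make_signed_put_url(bucket_host: str, object_path: str, ttl: int = 900) -> str:
    """Server side: mint a time-limited URL authorizing a single PUT.
    The client then uploads the recording straight to the bucket."""
    expires = int(time.time()) + ttl
    payload = f"PUT\n{object_path}\n{expires}".encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "signature": signature})
    return f"https://{bucket_host}/{object_path}?{query}"
```

The storage backend (or a verifying proxy) would recompute the HMAC on the incoming PUT and reject it if the signature doesn’t match or the expiry has passed.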
I don’t know what the common use cases for asciinema are, but I think it’s a really interesting piece of tech. My focus is on the AI/ML space, where I think it has huge potential for capturing the behavior of terminal-based coding agents in a way that is both agent- and human-interpretable, providing random access into a given session. These sessions can be quite large - in my own testing, I see recordings easily growing to several hundred MB fairly quickly, given the amount of text these agents chew through - hence the desire to interface with a cloud storage backend directly.
I’ll fork the client and server and test out this idea. (I actually already had Jules attempt a native GCP implementation, but I haven’t tested it yet.)
This isn’t something I’m actually interested in. I get how it may solve the big-file upload problem you have, but for 99% of asciinema self-hosters it wouldn’t solve much, while requiring major refactoring of how uploads are handled.
asciinema server needs to perform several operations on a new upload: one is generating a snapshot, another is generating the plain-text version of the recording and adding it to the search index. As long as the client makes an API call confirming the completed (direct) upload, the server can still do that fine - the file would just be delivered via another channel.
But there’s one thing the server does which complicates the situation: the uploaded file is validated, and only if it’s a valid asciicast (v1, v2, or v3) file is the recording created in the database and the above processing jobs started.
By pushing the file to GCP/S3 directly, we don’t have that easy gatekeeper anymore. The server could validate the file when the CLI makes the completion API call, but that would either make the response to that call take a moment (the server downloading and validating the file), or return immediately but potentially hand out a broken link (if the server kicked off validation in the background and decided to delete the file a moment later).
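For a sense of what the deferred validation step might check, here is a sketch of a cheap header test. Assumption worth flagging: asciicast v2 and v3 files begin with a one-line JSON header carrying a "version" field, while a v1 file is a single JSON document, so a real validator would handle v1 separately and also sanity-check subsequent event lines:

```python
import json

def looks_like_asciicast(first_line: str) -> bool:
    """Cheap sanity check the server could run after downloading a
    directly-uploaded file: asciicast v2/v3 files start with a one-line
    JSON header containing a "version" field. (v1 files are a single
    JSON document and would need a separate check.)"""
    try:
        header = json.loads(first_line)
    except json.JSONDecodeError:
        return False
    return isinstance(header, dict) and header.get("version") in (2, 3)
```

If this check fails during the background job, the server would delete the object from the bucket - which is exactly the broken-link window described above.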
All of this is solvable one way or another, but it would bring a whole set of things to solve, and I just don’t see it being worth it.
Got it - ok - thanks for the additional context and details about the validation process.
I’ll look more closely at the code and get familiar with how it works. If I implement it in my fork and it’s not super-intrusive, I’ll run it by you.
I was thinking of this more as additive functionality to the existing client/server components - i.e. maybe an additional option to asciinema upload, with a different endpoint on the server to broker the upload/PUT process.