Uploading large files with JavaScript
17 February 2021 update: The example code has been updated to extract the filename and append it to the first upload. Without this step, your video title will be 'blob'.
August 2021 update: Since this post was written, we've published a library to simplify uploading videos with JavaScript - read the blog post to learn more.
You can view the API reference documentation for the file upload endpoint here: Upload a video
Have you ever experienced a "file too large" error when uploading a file? As our presentations, PDFs, files and videos get larger and larger, we are stretching remote servers' ability to accept our files. With just a few lines of JavaScript, we can ensure that this error goes away, no matter what you are trying to upload. Keep reading to learn more.
The most common error with large uploads is the server response: HTTP 413: Request Entity Too Large. Since the server is configured to only accept files up to a certain size, it will reject any file larger than that limit. One possible resolution would be to edit your server settings to allow for larger uploads, but sometimes this is not possible for security or other reasons. (If the server limit gets raised to 2 GB for videos, imagine the images that might end up getting uploaded!)
Further, if a large file fails during upload, you may have to start the upload all over again. How many times have you gotten an "upload failed" at 95% complete? Utterly frustrating!
Segments/Chunks
When you watch a streaming video from api.video, Netflix or YouTube, the large video files are broken into smaller segments for transmission. The player on your device then reassembles the segments to play them back in the correct order. What if we could do the same with our large file uploads? Break the large file into smaller segments and upload each one separately? We can, and even better, we can do it in a way that is seamless to our users!
Baked into JavaScript are the File API and the Blob API, with full support across the browser landscape:
These APIs let us take a large file from our user and use the browser to break it up locally into smaller segments, with our users being none the wiser!
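As a quick, minimal sketch (standalone, using only standard browser APIs; the selector and segment size here are placeholders, not part of the demo that follows), slicing a user-selected file looks like this:

// Minimal sketch: grab a File from an <input type="file"> and cut out one segment.
// The selector and segment size are illustrative only.
const picker = document.querySelector('input[type="file"]');
picker.addEventListener('change', () => {
  const file = picker.files[0];            // a File is also a Blob
  const segment = file.slice(0, 1000000);  // first 1,000,000 bytes as a new Blob
  console.log(file.size, segment.size);
});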
Let's walk through how you might use this to upload a large video to api.video.
To follow along, the code is available on GitHub, so feel free to clone the repo and run it locally.
To build your own uploader like this, you'll need a free api.video account. Use this to create a delegated upload token. It takes just three steps to create using cURL and a terminal window.
A delegated token is a public upload key, and anyone with this key can upload videos into your api.video account. We recommend that you place a TTL (time to live) on your token, so that it expires as soon as the video is uploaded.
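If you prefer to script the token creation rather than type the cURL commands, the rough shape is below. This is a hedged sketch: the endpoint paths (/auth/api-key and /upload-tokens), payloads, and response fields are assumptions based on the api.video reference and may differ, so confirm them in the documentation before relying on this.

// Hedged sketch of creating a delegated upload token against the sandbox.
// ASSUMPTION: endpoint paths, payload shapes, and response fields mirror the
// api.video reference docs - verify before use.
const API_KEY = "YOUR_SANDBOX_API_KEY"; // placeholder

async function createDelegatedToken() {
  // 1. Exchange the API key for an access token
  const auth = await fetch("https://sandbox.api.video/auth/api-key", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ apiKey: API_KEY })
  }).then((r) => r.json());

  // 2. Create the delegated token, with a TTL so it expires quickly
  const created = await fetch("https://sandbox.api.video/upload-tokens", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + auth.access_token
    },
    body: JSON.stringify({ ttl: 3600 }) // seconds
  }).then((r) => r.json());

  // 3. Use the returned token in the upload URL: ...?token=<created.token>
  console.log(created.token);
}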
Now that you're back, we'll begin the process of uploading large files.
Markup
The HTML for our page is basic (we could pretty it up with CSS, but it's a demo 😛):
Add a video here: <br>
<input type="file" id="video-url-example"> <br> <br>
<div id="video-information" style="width: 50%"></div>
<div id="chunk-information" style="width: 50%"></div>
There is an input field for a video file, and then there are two divs where we will output information as the video uploads.
Next on the page is the <script> section - and here's where the heavy lifting will occur.
<script>
const input = document.querySelector('#video-url-example');
const url = "https://sandbox.api.video/upload?token=to1R5LOYV0091XN3GQva27OS";
var chunkCounter;
//break into 5 MB chunks at minimum
const chunkSize = 6000000;
var videoId = "";
var playerUrl = "";
We begin by creating some JavaScript variables:
- input: the file input element specified in the HTML.
- url: the delegated upload URL for api.video. The token in the code above (and on GitHub) points to a sandbox instance, so videos will be watermarked and removed automatically after 24-72 hours. If you've created a delegated token, replace the url parameter 'to1R5LOYV0091XN3GQva27OS' with your token.
- chunkCounter: the number of chunks that will be created.
- chunkSize: each chunk will be 6,000,000 bytes - just above the 5 MB minimum. For production, we could increase this to 100 MB or similar.
- videoId: the delegated upload will assign a videoId on the api.video service. This is used on subsequent uploads to identify the segments, ensuring that the video is identified properly for reassembly at the server.
- playerUrl: upon successful upload, this will hold the playback URL for the api.video player.
Next, we create an EventListener on the input - when a file is added, split up the file and begin the upload process:
input.addEventListener('change', () => {
  const file = input.files[0];
  //get the file name to name the file. If we do not name the file, the upload will be called 'blob'
  const filename = input.files[0].name;
  var numberofChunks = Math.ceil(file.size / chunkSize);
  document.getElementById("video-information").innerHTML = "There will be " + numberofChunks + " chunks uploaded.";
  var start = 0;
  var chunkEnd = start + chunkSize;
  //upload the first chunk to get the videoId
  createChunk(videoId, start);
We name the uploaded file 'file'. To determine the number of chunks to upload, we divide the file size by the chunk size. We round the number up, as any 'remainder' less than 6,000,000 bytes will be the final chunk to be uploaded. This is then written onto the page for the user to see. (In a real production app, your users probably do not care about this, but for a demo, it is fun to see.)
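As a worked example (the file size here is hypothetical), a 16,562,999-byte video split into 6,000,000-byte chunks gives:

// Hypothetical numbers, for illustration only
const exampleFileSize = 16562999;
const exampleChunkSize = 6000000;
Math.ceil(exampleFileSize / exampleChunkSize); // 2.76... rounds up to 3 chunks
// chunks 1 and 2 are 6,000,000 bytes each; chunk 3 holds the remaining 4,562,999 bytes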
Slicing up the file
The function createChunk slices up the file.
Next, we begin to break the file into chunks. Since the file is zero indexed, you might think that the last byte of the chunk we create should be chunkSize - 1, and you would be correct. However, we do not subtract one from the chunkSize. The reason why is found in a careful reading of the Blob.slice specification, which tells us that the end parameter is:
"the first byte that will not be included in the new Blob (i.e. the byte exactly at this index is not included)."
So, we must use chunkSize, as it will be the first byte NOT included in the new Blob.
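A tiny side sketch (not part of the demo code, values are illustrative) shows why this works: with an exclusive end index, consecutive calls to slice tile the file with no gaps and no overlaps.

// Blob.slice's end index is exclusive, so slice(0, 6000000) returns bytes 0-5,999,999
const firstChunk = file.slice(0, chunkSize);               // bytes 0 .. 5,999,999
const secondChunk = file.slice(chunkSize, 2 * chunkSize);  // bytes 6,000,000 .. 11,999,999
console.log(firstChunk.size); // 6000000 (for a file at least that large)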
function createChunk(videoId, start, end) {
  chunkCounter++;
  console.log("created chunk: ", chunkCounter);
  chunkEnd = Math.min(start + chunkSize, file.size);
  const chunk = file.slice(start, chunkEnd);
  console.log("i created a chunk of video" + start + "-" + chunkEnd + " minus 1");
  const chunkForm = new FormData();
  if (videoId.length > 0) {
    //we have a videoId
    chunkForm.append('videoId', videoId);
    console.log("added videoId");
  }
  chunkForm.append('file', chunk, filename);
  console.log("added file");
  //created the chunk, now upload it
  uploadChunk(chunkForm, start, chunkEnd);
}
In the createChunk function, we determine which chunk we are uploading by incrementing the chunkCounter, and again calculate the end of the chunk (remember that the last chunk will be smaller than chunkSize, and only needs to go to the end of the file).
In the first chunk uploaded, we append the filename to name the file (if we omit this, the file will be named 'blob').
The actual slice command
The file.slice breaks up the video into a 'chunk' for upload. We've begun the process of cutting up the file!
We then create a form to upload the video segment to the API. After the first segment is uploaded, the API returns a videoId that must be included in subsequent segments (so that the backend knows which video to add the segment to).
On the first upload, the videoId has length zero, so it is ignored. We add the chunk to the form, and then call the uploadChunk function to send this file to api.video. On subsequent uploads, the form will have both the videoId and the video segment.
Uploading the chunk
Let's walk through the uploadChunk function:
function uploadChunk(chunkForm, start, chunkEnd) {
  var oReq = new XMLHttpRequest();
  oReq.upload.addEventListener("progress", updateProgress);
  oReq.open("POST", url, true);
  var blobEnd = chunkEnd - 1;
  var contentRange = "bytes " + start + "-" + blobEnd + "/" + file.size;
  oReq.setRequestHeader("Content-Range", contentRange);
  console.log("Content-Range", contentRange);
We kick off the upload by creating an XMLHttpRequest to handle the upload. We add a listener so we can track the upload progress.
Adding a byterange header
When doing a partial upload, you need to tell the server which 'bit' of the file you are sending - we use the byterange header to do this.
We add a header to this request with the byterange of the chunk being uploaded.
Note that in this case, the end of the byterange should be the last byte of the segment, so this value is 1 byte smaller than the slice command we used to create the chunk.
The header will look something like this:
Content-Range: bytes 0-999999/4582884
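In other words, the header string is assembled from the slice boundaries, minus one on the end. The numbers below are illustrative and match the hypothetical 4,582,884-byte file above:

// Sketch of how the header value is built; values here are illustrative
const start = 0;
const chunkEnd = 1000000;  // exclusive end used by file.slice
const contentRange = "bytes " + start + "-" + (chunkEnd - 1) + "/" + 4582884;
// -> "bytes 0-999999/4582884"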
Upload progress updates
While the video chunk is uploading, we can update the upload progress on the page, so our user knows that everything is working properly. We created the progress listener at the beginning of the uploadChunk function. Now we can define what it does:
function updateProgress(oEvent) {
  if (oEvent.lengthComputable) {
    var percentComplete = Math.round(oEvent.loaded / oEvent.total * 100);
    var totalPercentComplete = Math.round((chunkCounter - 1) / numberofChunks * 100 + percentComplete / numberofChunks);
    document.getElementById("chunk-information").innerHTML = "Chunk # " + chunkCounter + " is " + percentComplete + "% uploaded. Total uploaded: " + totalPercentComplete + "%";
    // console.log(percentComplete);
  } else {
    console.log("not computable");
    // Unable to compute progress information since the total size is unknown
  }
}
First, we do a little bit of math to compute the progress. For each chunk we can calculate the percentage uploaded (percentComplete). Again, a fun value for the demo, but not useful for real users.
What our users want is the totalPercentComplete, a sum of the chunks already uploaded plus the amount currently being uploaded.
For the sake of this demo, all of these values are written to the 'chunk-information' div on the page.
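As a worked example with hypothetical numbers: with 3 chunks total and chunk #2 currently halfway uploaded, the overall figure works out to 50%.

// Hypothetical progress calculation, matching the formula above
const numberofChunks = 3;
const chunkCounter = 2;       // chunk #1 already finished, #2 in flight
const percentComplete = 50;   // chunk #2 is half uploaded
Math.round((chunkCounter - 1) / numberofChunks * 100 + percentComplete / numberofChunks);
// (1/3)*100 + 50/3 = 33.3 + 16.7 -> 50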
Chunk upload complete
Once a chunk is fully uploaded, we run the following code (in the onload event).
oReq.onload = function (oEvent) {
  // Uploaded.
  console.log("uploaded chunk");
  console.log("oReq.response", oReq.response);
  var resp = JSON.parse(oReq.response);
  videoId = resp.videoId;
  //playerUrl = resp.assets.player;
  console.log("videoId", videoId);
  //now we have the video ID - loop through and add the remaining chunks
  //we start one chunk in, as we have uploaded the first one.
  //next chunk starts at + chunkSize from start
  start += chunkSize;
  //if start is smaller than file size - we have more to still upload
  if (start < file.size) {
    //create the new chunk
    createChunk(videoId, start);
  } else {
    //the video is fully uploaded. there will now be a url in the response
    playerUrl = resp.assets.player;
    console.log("all uploaded! Watch here: ", playerUrl);
    document.getElementById("video-information").innerHTML = "all uploaded! Watch the video <a href=\'" + playerUrl + "\' target=\'_blank\'>here</a>";
  }
};
oReq.send(chunkForm);
When the file segment is uploaded, the API returns a JSON response with the videoId. We add this to the videoId variable, so it can be included in subsequent uploads.
To upload the next chunk, we increment the byterange start variable by the chunkSize. If we have not reached the end of the file, we call the createChunk function with the videoId and the start. This will recursively upload each subsequent slice of the large file, continuing until we reach the end of the file.
Upload complete
When start > file.size, we know that the file has been completely uploaded to the server, and our work is complete! In this case, we know that the server can accept 5 MB files, so we break up the video into many smaller segments to fit under the server's maximum size.
When the last segment is uploaded, the api.video response contains the full video response (similar to the get video endpoint). This response includes the player URL that is used to watch the video. We add this value to the playerUrl variable, and add a link on the page so that the user can see their video. And with that, we've done it!
Conclusion
In this post, we use a form to accept a large file from our user. To prevent any 413: file too large upload errors, we use the file.slice API in the user's browser to break up the file locally. We then upload each segment until the entire file has been completely uploaded to the server. All of this is done without any work from the end user. No more "file too large" error messages, improving the customer experience by abstracting away a complex problem with an invisible solution!
When building video uploading infrastructure, it is great to know that browser APIs can make your job of building upload tools easy and painless for your users.
Are you using the File and Blob APIs in your upload service? Let us know how! If you'd like to try it out, you can create a free account and use the sandbox environment for your tests.
If this has helped you, leave a comment in our community forum.
Source: https://api.video/blog/tutorials/uploading-large-files-with-javascript