I figured this out for non-chunked uploads by implementing a custom provider like the code below, but I am stuck on what to do for chunked uploads, since I need the entire stream to resize the image, not just a single chunk.
I found another post that said to add the following in a client-side script tag to disable chunked uploading, but that seems to disable multi-file select, which I need:
Telerik.Web.UI.RadCloudUpload.isFileApiAvailable = function () { return false; }
using System.Collections.Specialized;
using System.IO;
using Amazon.S3.Transfer;
using ImageResizer;
using Telerik.Web.UI;
/// <summary>
/// Summary description for CustomAmazonS3Provider
/// </summary>
public class CustomAmazonS3Provider : AmazonS3Provider
{
    public override void UploadFile(string keyName, NameValueCollection metaData, Stream fileStream)
    {
        var fileTransferUtility = new TransferUtility(AmazonS3Client);
        var fileTransferUtilityRequest = new TransferUtilityUploadRequest
        {
            BucketName = "bucketName",
            InputStream = ResizeImage(fileStream), // resize before the stream leaves for S3
            Key = keyName
        };
        fileTransferUtility.Upload(fileTransferUtilityRequest);
    }

    private Stream ResizeImage(Stream fileStream)
    {
        // Note: this buffers the resized image in memory, so very large files cost RAM.
        var outputStream = new MemoryStream();
        ImageBuilder.Current.Build(fileStream, outputStream, new ResizeSettings("maxwidth=1024&maxheight=1024"));
        outputStream.Seek(0, SeekOrigin.Begin);
        return outputStream;
    }
}
To close this topic out (thanks to a Telerik support ticket), adding this JavaScript will prevent chunking of larger files:
<script>
    var $T = Telerik.Web.UI;
    $T.RadCloudUpload.HandlerUploader.prototype._calculateChunkSize = function () {
        switch (this._providerType) {
            case $T.CloudUploadProviderType.Amazon:
                this._chunkSize = this._uploadingEntity.file.size; // used to be FIVE_MB
                break;
            case $T.CloudUploadProviderType.Everlive:
                this._chunkSize = this._uploadingEntity.file.size;
                break;
            case $T.CloudUploadProviderType.Azure:
                this._chunkSize = this._uploadingEntity.file.size; // used to be TWO_MB
                break;
        }
    };
</script>
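A quick back-of-the-envelope check shows the effect of this override (illustration only; `chunkCount` is a hypothetical helper, not part of the Telerik API). With the documented 5 MB chunk size, a 12 MB file would be split into three requests and hit `UploadChunk`; setting the chunk size to the file size yields a single request, which routes through `UploadFile` instead:

```javascript
// Illustration: how many upload requests would be issued for a given
// file size and chunk size. Hypothetical helper, not Telerik API.
function chunkCount(fileSize, chunkSize) {
    return Math.max(1, Math.ceil(fileSize / chunkSize));
}

var FIVE_MB = 5 * 1024 * 1024;
var fileSize = 12 * 1024 * 1024; // a 12 MB image

console.log(chunkCount(fileSize, FIVE_MB));  // 3 requests -> UploadChunk path
console.log(chunkCount(fileSize, fileSize)); // 1 request  -> UploadFile path
```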
[Webpage -> S3]
as opposed to
[Webpage -> Server -> S3]
If you write your own provider to resize the image prior to upload, won't it take the latter route, with an extra stop at the server? Or am I missing something?
Hello Alex,
Indeed, the Cloud Upload does not store the file on the server; it sends it directly to the cloud.
The thing is that there are two ways to send the file: either in multiple chunks, or as the whole file in a single request. The latter approach allows you to intercept the file in the custom provider and modify it before sending it to the cloud. Here are the methods used in both cases, from AmazonS3Provider.cs:
Called when using chunk upload:
/// <summary>
/// Uploads current chunk of the file. It is used when the file is more than 5MB.
/// </summary>
/// <param name="config">Contains the UploadID, part number and the key name.
/// <c>
/// <para>string uploadId = config["uploadId"];</para>
/// <para>string partNumber = config["partNumber"];</para>
/// <para>string keyName = config["keyName"];</para>
/// </c>
/// </param>
/// <param name="fileStream">The content of the uploaded chunk.</param>
/// <remarks>
/// After the upload is done the ETag of the response should be assigned to the UploadedPartETag property.
/// </remarks>
public virtual void UploadChunk(NameValueCollection config, Stream fileStream)
{
Called when submitting the whole file at once:
#region ICloudStorageProvider Implementation
/// <summary>
/// Uploads the file with single request. It is called when the file is less than 5MB or the file is uploaded under IE9,8,7 where chunk upload is not supported.
/// </summary>
/// <param name="keyName">Unique name under which the file will be uploaded to the storage. This avoids file replacement.</param>
/// <param name="metaData">Meta data associated with the current upload.</param>
/// <param name="fileStream">The content of the uploaded file.</param>
public virtual void UploadFile(string keyName, NameValueCollection metaData, Stream fileStream)
{
That is why B shared his implementation of the custom provider and explained that it only works when chunking is disabled.
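To see why the chunked path defeats the resizing approach: each `UploadChunk` call receives only a slice of the file's bytes, while an image decoder needs the complete stream before it can do anything. A toy sketch (hypothetical helper, not part of any real decoder or the Telerik API) makes the point with a PNG header:

```javascript
// Toy "decoder" check: a valid PNG begins with a fixed 8-byte signature.
// Here we only test the first signature byte as a stand-in, to show that
// an isolated later chunk cannot be decoded on its own.
function isCompletePng(bytes) {
    return bytes.length > 0 && bytes[0] === 0x89;
}

var file = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]; // PNG signature bytes
var chunk1 = file.slice(0, 4);
var chunk2 = file.slice(4);

console.log(isCompletePng(file));   // true: the whole stream starts with the header
console.log(isCompletePng(chunk2)); // false: a later chunk lacks the header entirely
```

This is why the resize has to happen in `UploadFile`, where the full stream is available, rather than in `UploadChunk`.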
Regards,
Peter Milchev
Progress Telerik