We've tried to isolate the issue based on our end users' systems, but there doesn't seem to be any consistent setup; it seems random. Some are on Macs and some are on PCs. It is consistent, though, in that the same user can upload 5 similar files and 1 will be corrupted. If they try to upload that same file again, it is corrupted again. Yet if they FTP the file to us and we upload it through the same web page, it uploads fine.
We thought it might have to do with timeouts (users with slower systems), but that doesn't explain why it happens to the same file when they re-upload just that single file.
We have had a couple of them tell us that the "select" button doesn't always pop up the file-selection window, but that seems like a completely unrelated issue.
Any thoughts or direction on this would be greatly appreciated. As a note, there is a second process that runs on the uploaded file, where an encoder translates it into a new file of a different format, and it's possible that is what is corrupting the original. However, this same process ran alongside the original RadUpload for about 4 years and we never encountered the issue.
We're stumped!
thanks,
Jeff
15 Answers, 1 is accepted
We will have to perform some tests and try to reproduce the issue locally. Would you please let us know whether there is a particular browser under which the issue is observed, or whether it happens in different browsers?
And one more question: is there a specific file extension or size that your clients most often upload, so we could start testing around it?
Plamen
Telerik
Here are the additional details:
1. Different browsers, with some users on a Mac and some on a PC. Users have reported it specifically from Safari on Mac and Firefox on Mac, but we've had the same issue occur for users on PCs.
2. The format doesn't seem to matter - it's happened to .mov, .avi and .wmv files.
3. Size is generally 70MB and larger.
I think it might relate to the system timing out partway through for users with slower systems. I haven't yet been able to find a way to replicate it on this side to see if that might be the issue.
thanks
thanks - any guidance on this would be greatly appreciated. We're frustrated enough to want to go back to RadUpload, but we understand that it is no longer supported.
I have been inspecting the issue but could not replicate the described behavior so far. It may be caused by the fact that files are uploaded in chunks: if a request times out during the upload, the file may end up corrupted. In that case I can recommend setting the DisablePlugins="true" and DisableChunkUpload="true" properties, which will make the file upload in a single chunk and should resolve the issue.
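A minimal markup sketch of the suggested configuration (the control ID is an example, not from this thread):

```aspx
<telerik:RadAsyncUpload ID="RadAsyncUpload1" runat="server"
    DisablePlugins="true"
    DisableChunkUpload="true" />
```

Note that with chunking disabled the whole file is posted in a single request, so the application must be configured to accept requests of that size.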
Hope this will be helpful.
Plamen
Telerik
I have tried disabling chunk uploads and disabling plugins. With this I cannot upload even small files, the progress bar doesn't appear, and I have no clue what's going on. Please help. Thank you.
Be sure that you have configured your application to accept larger files. In order to do that, follow the steps from this article.
There is no need to register your custom handler in the web.config. However, if your site is protected with authentication, you will have to allow unauthenticated users access to the handler.
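For example, a web.config sketch allowing uploads up to roughly 500 MB (the values here are illustrative, not from this thread; adjust to your needs):

```xml
<configuration>
  <system.web>
    <!-- maxRequestLength is in kilobytes; executionTimeout is in seconds -->
    <httpRuntime maxRequestLength="512000" executionTimeout="3600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is in bytes -->
        <requestLimits maxAllowedContentLength="524288000" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```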
Regards,
Hristo Valyavicharski
Telerik
We're having the same problem. Is there a definitive solution to this, and has it been fixed in a subsequent release? I'm not sure what you mean by 'perform any timeout request' - do you just mean that one of the chunks times out? Shouldn't the recombining of chunks detect that a chunk is missing and report an error, or time out the whole file, or something?
Are there any performance implications to setting DisableChunkUpload to true?
Thanks
Glenn.
We have found the issue, but cannot find a workaround. If the application pool is recycled during an upload (which is much more likely the larger the file), the chunks that have already been uploaded are deleted, because the cache dependency is removed before the file is complete. This results in only the latter half of the file making it over.
Is there any way to get around this? I was hoping to set TemporaryFileExpiration to 0 and do our own clean-up with a scheduled task, but with an expiration of 0 the file doesn't even get saved from the temporary folder to the upload folder; it just disappears altogether.
Could I perhaps do something with an upload handler to stop the item being added to the dependency cache?
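For reference, the temporary folder and its expiration window are set in markup like this (a minimal sketch; the folder path and timespan are examples, not from this thread):

```aspx
<telerik:RadAsyncUpload ID="RadAsyncUpload1" runat="server"
    TemporaryFolder="~/App_Data/RadUploadTemp"
    TemporaryFileExpiration="04:00:00" />
```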
Please look here: http://www.telerik.com/forums/radasyncupload-without-temporary-file#6s97rADZ7kmT0nZxf60WQA
Regards,
Hristo Valyavicharski
Telerik
DisableChunkUpload="true" won't work for us; some of our clients need to upload files larger than 2 GB, so we couldn't do that. Just to be clear on how it works, as it doesn't seem to be documented anywhere: when a file is uploaded in chunks, it is written to the temp area and an entry is inserted into the cache (HttpRuntime.Cache) that will expire after a set period (TemporaryFileExpiration/TimeToLive), removing it from the cache. When the entry is removed from the cache, a callback runs that deletes the file from the temp area.
The problem with this is that if the entry gets removed for any other reason, e.g. an app pool recycle or the cache reaching a resource limit (see http://blogs.msdn.com/b/tmarq/archive/2007/06/25/some-history-on-the-asp-net-cache-memory-limits.aspx), it is removed regardless of expiry, which means any file currently being uploaded has its temp-directory contents deleted. This then results in a corrupt upload.
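The behavior described above can be pictured with a minimal sketch of the inferred mechanism (the method and parameter names here are assumptions for illustration, not Telerik's actual internals):

```csharp
using System;
using System.IO;
using System.Web;
using System.Web.Caching;

static class TempFileCacheSketch
{
    // Sketch: each upload gets a cache entry whose removal callback deletes
    // the temporary file. The callback fires for ANY removal reason, including
    // app-pool recycles and memory-pressure evictions, not just sliding
    // expiration, so an in-progress upload can lose its temp file.
    public static void Track(string uploadId, string tempFilePath, TimeSpan timeToLive)
    {
        HttpRuntime.Cache.Add(
            uploadId,                     // key: the UploadID / temp file name
            tempFilePath,                 // value: path to the temp file
            null,                         // no cache dependency
            Cache.NoAbsoluteExpiration,
            timeToLive,                   // sliding expiration (TemporaryFileExpiration)
            CacheItemPriority.Default,
            (key, value, reason) =>
            {
                // 'reason' is not checked here (as it appears not to be in the
                // control), so eviction under memory pressure still deletes a
                // file that is still being uploaded.
                File.Delete((string)value);
            });
    }
}
```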
At the very least this needs to be documented so that others don't fall into the same trap and know what's happening within their web app, but ideally people need to be able to upload files without issues. Maybe a check on the removal reason needs to be added, or perhaps we should just be given the option to do our own clean-up; there are files that need to be cleaned up anyway.
We have potentially found a workaround by intercepting the incoming upload requests and adding the cache entry ourselves, with a null remove callback, before the upload component has a chance to.
private void Application_BeginRequest(object sender, EventArgs e)
{
    // Check it's a Telerik file upload request and that there's a file being uploaded.
    // If using a custom handler, this would need to be the handler's URL.
    if (Request.RawUrl.Contains("Telerik.Web.UI.WebResource.axd?type=rau") && Request.Files.Count > 0)
    {
        try
        {
            // The "metadata" form field contains all the metadata for the file
            // being uploaded, in JSON format. Example:
            // {"TotalChunks":66,"ChunkIndex":0,"TotalFileSize":204962801,"UploadID":"1447250454431MyVideo.mp4","IsSingleChunkUpload":false}
            dynamic json = JObject.Parse(Request.Form["metadata"]);

            // Get the upload ID, which is the file name used for the temporary file
            string fileName = json.UploadID.Value;

            // Check whether it's in the cache and add it first so Telerik doesn't
            // get to, because when Telerik adds it, it attaches a remove callback
            // that deletes the temporary file, and we plan to do our own clean-up.
            if (fileName != null && Context.Cache.Get(fileName) == null)
            {
                string tempPath = string.Format(@"{0}\{1}", UploadTempPath, fileName);
                Context.Cache.Add(fileName, tempPath, null,
                    System.Web.Caching.Cache.NoAbsoluteExpiration,
                    new TimeSpan(23, 0, 0),
                    System.Web.Caching.CacheItemPriority.Default, null);
            }
        }
        catch
        {
        }
    }
}
I noticed that the metadata JSON has FileSize, TotalChunks, etc. Couldn't the control use these to verify that the uploaded file isn't corrupted, or are these details not reliable? I'm assuming the UploadID is reliable? Can I also assume the remove callback only deletes the temporary file, or does it do anything else I need to account for?
Documentation please!
Rather than doing it in BeginRequest, I think the same thing can be achieved in a custom handler constructor, as such...
public class FileDataUploadHandler : AsyncUploadHandler
{
    public FileDataUploadHandler() : base()
    {
        HttpContext context = HttpContext.Current;
        try
        {
            // The "metadata" form field contains all the metadata
            // for the file being uploaded, in JSON format
            dynamic json = JObject.Parse(context.Request.Form["metadata"]);

            // Get the upload ID, which is the file name used for the temporary file
            string fileName = json.UploadID.Value;

            // Check whether it's in the cache and add it first so Telerik doesn't
            // get to, because when Telerik adds it, it attaches a remove callback
            // that deletes the temporary file, and we plan to do our own clean-up.
            if (fileName != null && context.Cache.Get(fileName) == null)
            {
                string tempPath = string.Format(@"{0}\{1}", UploadTempPath, fileName);
                context.Cache.Add(fileName, tempPath, null,
                    System.Web.Caching.Cache.NoAbsoluteExpiration,
                    new TimeSpan(23, 0, 0),
                    System.Web.Caching.CacheItemPriority.Default, null);
            }
        }
        catch
        {
        }
    }
}
Thank you for sharing your solution to the issue.
Regards,
Plamen
Telerik