The title pretty much covers the question: how can I avoid sporadic 504 errors when making extensive API calls?
So, the basic situation is that I'm trying to generate a report of every appearance of a string within the content of all courses within an account. Yeah, that in itself sounds like a lot, and it is. The calls are synchronous, though I intend to revise that once I get it working 100%, so there's no issue of one call interfering with another.
Having said that, I am receiving inconsistent 504 errors, and I can't determine the reason. I know the connections are still good, so I can only assume it's a glitch in the call handling; whether it's on my end or Instructure's, I'm not sure. As far as I can tell, every call I make is 100% valid, as they do work, standalone or within a bulk queue. That the same code works individually and in groups leads me to believe the issue isn't being caused by my code; after all, if that were the case, the errors would be consistent when running with identical values, and they're not.
Searching around, I found plenty of people have had issues with making API calls for various reasons, but I couldn't find anything specifically for 504 Timeouts.
Has anyone encountered these with the Canvas API before? If so, how do you handle them?
My current solution is to simply resubmit any call whose response reports the error, but I'd rather avoid bloating the call queue any further. A single search of our Master shells took 15,486 calls (plus an additional 19 due to the timeouts) and 128 minutes to execute. The concern is that the account is constantly growing, and those numbers are already high.
When inserting a lot of quiz questions into the test instance via the API, I am seeing sporadic HTTP 500 and 404 errors. (See "Timeout when loading a large quiz to edit it".) I use exponential backoffs with increases of 2 seconds per try and limit each API request to 3 tries, and still see some persistent failures.
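The retry scheme described above (a capped number of tries with a growing delay between them) can be sketched roughly like this in Python. The `fetch` callable, parameter names, and defaults are illustrative, not anything from the actual program:

```python
import time

def call_with_retry(fetch, tries=3, base_delay=2, sleep=time.sleep):
    """Call fetch() until it returns a status below 500 or tries run out.

    fetch must return an object with a .status_code attribute (e.g. a
    requests.Response). The delay grows by base_delay each attempt
    (2s, 4s, ...), mirroring the "2 seconds per try" scheme above.
    """
    resp = None
    for attempt in range(1, tries + 1):
        resp = fetch()
        if resp.status_code < 500:      # success, or a client error not worth retrying
            return resp
        if attempt < tries:
            sleep(base_delay * attempt)  # linear backoff: 2s, then 4s, ...
    return resp                          # persistent failure: hand back the last response
```

Injecting `fetch` and `sleep` keeps the retry policy separate from the HTTP library and makes the logic easy to test without a network.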
Yes, limiting the attempts would help mitigate the runtime, but it would also produce incomplete reports. Personally, I'd rather have all the data and take 30 minutes longer on the first run than have to rerun a 2-hour report.
I suppose a more robust logging system would allow these failed calls to be documented and rerun without having to rerun the entire report, but if a failure is part of a complex report requiring multiple calls paired together, this may prove significantly more troublesome than beneficial.
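A minimal sketch of that kind of failure log, assuming a JSON-lines file (the file name and record fields here are my own choices, not anything Canvas-specific): each failed call is appended as one line, so a later pass can replay just those requests instead of the whole report.

```python
import json

FAILED_LOG = "failed_calls.jsonl"  # hypothetical log file name

def log_failure(method, url, params, status):
    """Append one failed request per line for a later targeted re-run."""
    with open(FAILED_LOG, "a") as f:
        f.write(json.dumps({"method": method, "url": url,
                            "params": params, "status": status}) + "\n")

def load_failures():
    """Read the failed requests back so only they need to be replayed."""
    with open(FAILED_LOG) as f:
        return [json.loads(line) for line in f]
```

For reports whose calls are paired together, the record would also need some notion of which group a call belongs to, which is where the bookkeeping starts to get troublesome, as noted above.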
Christopher: I agree that for a production activity, I would not want to have to record which requests failed and try to redo them. Hence, when I apply the code to the production instance, I will be much more persistent; otherwise I'll end up doing the whole thing over again. Alternatively, I have considered running the program against a private instance and then moving the QTI.zip file over, but this seems to be the wrong path.
In your case, there really aren't any good options due to the significant limitations of the Quizzes API. When we did our bulk migration from Moodle to Canvas, we wrote a script to build QTI files off our question banks and import those. Then, we'd manually go in and rebuild the quizzes, since the API prevented any work between banks and quizzes.
Sure, we could've loaded the questions directly to the quizzes, but that would put all questions in the blanket Unfiled Questions bank, which makes the whole banking system pretty much useless.
I've been taking the questions from earlier instances of a course in one LMS (not Moodle) and injecting them as questions into a working course (which just serves as a holding place). I export the QTI file from this course and the instructor imports it into their course.
Along the way I take care of making multiple instances of a question when it contains random variables (the earlier system had several different methods of introducing random variables that different instructors used). I put the multiple instances into a question group. This means the instructor can later create multiple sections in the course for a quiz and have each "section" get a different quiz (with each quiz picking a different question from the question group). An early version of the program handled one form of random variable, where the variable was simply used to select the Nth set of values in the answer vector(s). The next version handles a second type: random variables with ranges of answers (based upon questions with both correct and incorrect ranges of values and appropriate comments for each range).
Have you made your Moodle to Canvas quiz script available? I know some instructors who need to move a lot of questions from their Moodle system to Canvas.
I'm afraid I can't share it. It wasn't a script I wrote, though I did help. I know releasing it had been discussed a few times in the past, but I think it was decided against because, while it allowed us to bulk-move our courses, there was a significant amount of customized code for handling our content in particular. Further, due to the massively diverse nature of Moodle instances, it would only really be good for a select group of instances, if any. Many of our addons were customized to meet our requirements, making them no longer compatible with the widely available versions.
No problem! I too have specialized my scripts based upon what the actual content of the quizzes is. For example, while I can evaluate expressions as the value for an answer (or bound on an answer), I only do so for those functions that have actually been used in javascripts in the former LMS. I'm taking advantage of the fact that while there are >145k javascripts, only slightly more than 8,000 are actually unique; hence I am heavily exploiting common patterns in the javascript. [However, parsing has been complicated by the mixture of both periods and commas as decimal indicators in numbers, sometimes even within a set of answers for a single problem.]
Across 275 successful requests to insert questions into the Canvas test instance, there were 32 persistently failing HTTP 500 errors and 224 HTTP 500 errors in total. This is with up to 7 attempts and a binary exponential backoff (2^tries seconds between attempts), so by the 7th attempt there is more than a minute between requests.
Hi Christopher --
I think that error handling is just going to be a fact of life here (and with APIs in general). To me, 19 retries out of over 15k calls seems pretty good! 🙂
It might be worth considering another approach for getting at least some of this data: I know that you can get the full contents of all wiki pages via Canvas Data, and that would be much more efficient than pulling it all down via API calls. You should be able to get the full text content of discussions, announcements, and conversations via Canvas Data, too. I suspect that you can get all quiz text content there as well. If you're interested in learning more, there's a Canvas Data section in the Admin Guide, and there's a Big Data group here in the Community.
I recently ran some queries against all of our wiki page content to find courses that were using the now deprecated accordion javascript, and IIRC it took less than 5 minutes to search through over 184k pages. (To be fair, it probably took me 20 minutes to pull down all of our current Canvas Data extracts and import them into my database first.)
--Colin
The problem with relying on Canvas Data is that it requires special access and is not usable for most users. We have PHP scripts and JavaScript userscripts that rely upon the credentials of the user. Of our team, I'm the only Canvas administrator and even I don't have access to Canvas Data. I'm certain I could get access, but there's no way we would get approval to give such access to everyone, separation of duties and all that.
The good thing about using the API is that it's limited by users' regular permissions, so they can't do anything with it they wouldn't otherwise be allowed to do. My concern here was purely about the execution length being increased by the timeout length. A successful call may take half a second, but a timeout can take, if I remember my cURL correctly, 5 minutes per attempt by default (don't quote me on that value, it's been a while since I learned cURL). This can easily bloat the runtime to an extreme.
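As a hedge against those long default waits, an explicit timeout makes a stalled request fail in seconds so it can be handed straight to the retry pass. A sketch using Python's standard-library urllib rather than the PHP/cURL mentioned above; the function name and return convention are my own:

```python
import socket
import urllib.error
import urllib.request

def fetch_fast(url, headers=None, timeout=30):
    """GET with an explicit timeout so a stalled gateway fails in seconds
    rather than hanging for a multi-minute default.

    Returns (status_code, body_bytes), or None on a timeout or connection
    failure so the caller can queue the request for another attempt.
    """
    req = urllib.request.Request(url, headers=headers or {})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status, resp.read()
    except urllib.error.HTTPError as e:
        return e.code, b""                     # a 504 arrives as HTTPError, not an exception to hide
    except (urllib.error.URLError, socket.timeout):
        return None                            # treat like a timeout: hand back to the retry queue
```

In cURL terms, this corresponds to setting the overall timeout option explicitly instead of relying on the default, so each failed attempt costs seconds rather than minutes.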
Sadly, I've not come up with any options to avoid 504s, so the best I can do is to have the logic to reattempt the calls when it fails.
Yeah - you definitely need to carefully control access to the raw Canvas Data, but the general approach that I've seen is to have a central/administrative group use it to produce reports that can be shared more broadly.
--Colin
I completely agree; we're just not that group. The problem with such a setup is that those reports tend to be static. Most have to be, due to the need to map relationships between data to provide meaningful results. My team develops our courses, we don't administrate them, but there are reports we configure "on the fly" that are useful only to us (like finding an outdated link across all of our shells).
It's really a grey area. There's a potential solution in which one person is granted access to Canvas Data for report generation and we set up our own custom reports; however, the benefit of such a setup is offset by the amount of work placed on the individual setting up and running those reports.
It's a two-edged sword either way. One option costs runtime, which the user can offset by working on other things, while the other places the workload of report management on a single individual and prevents them from completing other tasks while working on reports.
Colin: Since the application that I am working on is so that I or another instructor can put data into Canvas, using Canvas Data is infeasible. While Canvas Data might be useful for analytics and generating reports, I do not think it will work well when instructors are trying to inject content.
I also agree with Christopher, that having the API limited by the instructor's access permissions is a wonderful feature - with ideally anything the instructor can do via the GUI do able via the API.