Hi, I hope all is well.
I am writing a Canvas script in PHP Laravel that fetches every record from the API endpoint "https://my.test.instructure.com/api/v1/accounts/1/courses". I make the request with cURL, and the goal is to work through every value that 'accounts/1/courses' returns.
My cURL request looks like this (the options are passed through curl_setopt_array, and the response is captured with curl_exec):

$headers = ['Authorization: Bearer ' . $token];
$url = "https://ucc.test.instructure.com/api/v1/accounts/1/courses"; // can also add "?per_page=1000&page=1";

$curl = curl_init();
curl_setopt_array($curl, [
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLINFO_HEADER_OUT    => TRUE,
    CURLOPT_URL            => $url,
    CURLOPT_HTTP_VERSION   => CURL_HTTP_VERSION_1_1,
    CURLOPT_SSL_VERIFYPEER => TRUE,
    CURLOPT_HTTPHEADER     => $headers,
    CURLOPT_CUSTOMREQUEST  => 'GET',
    CURLOPT_HEADER         => TRUE,
]);
$response = curl_exec($curl);
curl_close($curl);
I know that to display this material on a webpage it must be paginated, so I add parameters to the URL such as "?per_page=100&page=1". That all works fine.
However, the script I am writing is a back-end one that processes the results without displaying them on a webpage. I want something I can run as part of a weekly schedule. Ideally I would call it just once, since there are no page buttons for any user to click through. In short, I want the script to walk the entire collection.
Does my script still have to fetch the data in paginated form, page by page? (The only way I can see to make this work in the back-end is one long loop that follows from page to page.) Or can I give my cURL request the single URL and pull all of the potentially tens of thousands of entries in one call?
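For reference, the page-by-page loop I have in mind would look roughly like this. This is only a sketch, assuming Canvas keeps returning a rel="next" link in the Link response header until the last page; the environment variable CANVAS_TOKEN and the regex for the Link header are my own placeholders, not anything from the real script:

```php
<?php
// Sketch: follow the Link: rel="next" header until no next page remains.
$token = getenv('CANVAS_TOKEN'); // assumed to hold a valid Canvas API token
$url   = 'https://ucc.test.instructure.com/api/v1/accounts/1/courses?per_page=100';
$allCourses = [];

while ($url !== null) {
    $curl = curl_init();
    curl_setopt_array($curl, [
        CURLOPT_URL            => $url,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HEADER         => true, // keep headers so the Link: header can be read
        CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . $token],
    ]);
    $response   = curl_exec($curl);
    $headerSize = curl_getinfo($curl, CURLINFO_HEADER_SIZE);
    curl_close($curl);

    // Split raw headers from the JSON body.
    $rawHeaders = substr($response, 0, $headerSize);
    $body       = substr($response, $headerSize);
    $allCourses = array_merge($allCourses, json_decode($body, true));

    // Look for <...>; rel="next" in the Link header; stop when it disappears.
    $url = null;
    if (preg_match('/<([^>]+)>;\s*rel="next"/', $rawHeaders, $m)) {
        $url = $m[1];
    }
}
// $allCourses now holds every course accumulated across all pages.
```

Following the Link header rather than incrementing a page counter is what I understand Canvas recommends, since the header tells you exactly when the last page has been reached.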