Hi,
What would be the most efficient way to get the list of students (users with enrollment type = 'StudentEnrollment') from a course object?
Thank you
What language?
List users in course - Courses - Canvas LMS REST API Documentation
// jQuery
$.ajax({
    url: '/api/v1/courses/:course_id/users', // replace :course_id with the course's id
    method: 'GET',
    data: {
        enrollment_role: 'StudentEnrollment', // students only
        per_page: 100
    }
}).done(function (r, x, s) { console.log(r) }) // r is the array of user objects
Thank you Robert!
I'm using Python for my project.
Kevin,
I've been switching to and writing Ruby lately.
Brushed the dust off an old script and put this together.
import requests
import json

token = "11~asdfghjkl"
api = "https://x.instructure.com/api/v1/"
headers = {'Authorization': 'Bearer ' + token}

course_id = 123
payload = {
    'enrollment_role': 'StudentEnrollment',
    'per_page': 100
}

url = api + 'courses/' + str(course_id) + '/users'
results = requests.get(url, params=payload, headers=headers)
users = json.loads(results.text)
print(users)

# headers in case you need to deal with pagination
headers = results.headers
print(headers)
@kj460 ,
If any of your courses have more than 100 students in them, you'll have to deal with Pagination as well. There's a long thread with lots of examples here in the Community at Handling Pagination.
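In case it helps to see it in code, here's a minimal sketch in JavaScript that follows the next links until they run out (the token, host, and course id are placeholders you'd supply yourself):

// Minimal pagination sketch: keep following the Link header's rel="next" URL.
// The token, host, and course id below are placeholders.
const fetch = require('node-fetch'); // or the browser's built-in fetch
const apiToken = '11~asdfghjkl';

async function getAllStudents(courseId) {
    const students = [];
    let url = 'https://x.instructure.com/api/v1/courses/' + courseId +
        '/users?enrollment_role=StudentEnrollment&per_page=100';
    while (url) {
        const res = await fetch(url, {
            headers: { 'Authorization': 'Bearer ' + apiToken, 'Accept': 'application/json' }
        });
        students.push(...await res.json());
        // Link: <https://...&page=2>; rel="next", <https://...>; rel="last"
        const next = (res.headers.get('Link') || '')
            .split(',').find(part => part.includes('rel="next"'));
        url = next ? next.match(/<([^>]+)>/)[1] : null; // stop when there's no next page
    }
    return students;
}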
I just got this enrollments call working last night with JavaScript. Given my history, that's not impressive and not worthy of mention. What is worthy, perhaps, at least to me, is that I managed to get it working with the Bottleneck library so I can run concurrent requests and throttle them so I don't overload the system. After the first request is made, it recursively adds all of the additional requests based on the next and last link headers and then generates all of the promises based on the page= query parameter. I read a list of active courses from a database and then run parallel requests to fetch the information. I loosely handle network failure -- I attempt one retry. I store the data into a database as it arrives by looking at the fields in the object rather than waiting until all of the data has arrived. I downloaded all assignments and assignment groups last night (I had moved on from enrollments) for 408 courses in per_page=50 chunks in 172 seconds using 20 concurrent fetch() statements. The x-rate-limit-remaining never dropped below 551.3973, and most of that was early on when it had imposed the 50 penalty before it figured out how much the request was really going to take. After the 30th call, it never dropped below 600.
I'm sure other people have been doing this for a while, especially those with large data-warehouse needs that can't be met through Canvas Data, but it was a large jump forward for me getting the data I need for Starfish -- and then hopefully being able to lighten up a little (we know that won't happen).
That's cool! I will have to check out Bottleneck.
I'm working on my first project with concurrency for Live Events. I looked into it for Node, but I'm sticking to Ruby for now, mostly because I found a very nice and maintainable gem for handling concurrency for SQS. Will be sharing soon. I need to work in daemonization for background runtime and am trying to prepackage support for MSSQL, MySQL, PostgreSQL, and Oracle... community flavors I've noticed. 2 down, 2 to go.
A second read of this makes it sound like, in my haste to get this written before having to leave for work, I oversold Bottleneck. Bottleneck is just the limiting/throttling portion. There are other things out there for Node, especially if you want to use request or request-promise, but I've been trying to stick with node-fetch since fetch is available within the browser. All the rest of the stuff was the hard part; Bottleneck just allowed me to make multiple requests without making too many simultaneous requests. By the way, Bottleneck allows throttling of any group of promises, not just API calls, and it isn't limited to promises; it also works with callbacks and even allows for clustering.
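To give a feel for it, here's a rough sketch of just the throttling piece (the token and URL list are placeholders, and fetchPage stands in for whatever does the real request):

// Rough sketch: throttle a batch of API calls with Bottleneck.
const Bottleneck = require('bottleneck'); // npm install bottleneck
const fetch = require('node-fetch');
const apiToken = '11~asdfghjkl'; // placeholder

const limiter = new Bottleneck({
    maxConcurrent: 20, // at most 20 requests in flight at once
    minTime: 50        // at least 50 ms between job starts
});

function fetchPage(url) {
    return fetch(url, { headers: { 'Authorization': 'Bearer ' + apiToken } })
        .then(res => res.json());
}

// limiter.schedule() returns a promise, just like the function it wraps,
// so the whole batch can be awaited with Promise.all().
const urls = ['/api/v1/courses/123/users?per_page=50']; // placeholder list
Promise.all(urls.map(url => limiter.schedule(() => fetchPage(url))))
    .then(results => console.log(results.length + ' requests finished'));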
I admire your ability to bounce back and forth between so many different languages. Unfortunately, I'm stuck using what I know and when there's not a library for it, I end up writing my own. I could be a lot more efficient if I knew what I was doing.
Could just be my focus and fixation on concurrency and multi-threading, and I didn't really read into the Bottleneck doc. It still sounds like it's an effective way to get through requests quickly. My immediate desire was to try and test it with the /sub_accounts endpoint and see how fast I could collect 81 pages.
Reviewing our nudge program for another thread, I realized I call sleep(0.4) after creating each conversation to 'be nice to the API'. From what I gather in your post, I could use something like Bottleneck (or try writing it in Node) and attempt to create all the conversations within a few minutes instead of one at a time. I haven't really spent much time trying to understand x-rate-limit or its temperament; I've usually just sided on being nice.
By no means am I an expert in any of these languages, and as you've stated elsewhere in the context of JavaScript, I often just Google 'how to *' and then look up documentation. I am not developing elaborate programs, mostly scripts. But I find some resources are easier or more abundant for certain languages than others. That usually makes me feel like I should choose the language that has the support, or gem/module, for maintenance reasons (no time for inventing wheels here), or for being kind to whomever ends up supporting the code after me. The rest is curiosity, practice, or trying to understand why I make those decisions.
I am currently using two Gems in the Live Events code that were chosen because of the Github activity (forks, frequency of commits, and contributors), creator involvement, cross-platform support (multi databases) and even an IRC channel where the developer answers questions. That kinda beats having to grok the entire library of documentation before I can start making things work.
Whereas I admire your proficiency in being able to comprehend, regurgitate and share your knowledge in detail. I often have to regroup and re-understand what I'm doing before I share or as I make my replies. I'm finding that the more I share, evaluate, and explain these things the more comfortable I get with what I'm doing and it helps my learning process.
Unfortunately, I haven't been able to figure out how to get anything to monitor the x-rate stuff and adjust based off that, so I set the number of concurrent sessions. There is an option with bottleneck to update the settings, but it says that it doesn't affect scheduled jobs, so that kind of defeats the purpose for me.
Today, I downloaded 190k submissions for our active courses in less than 9 minutes with 20 concurrent calls going. That includes all submissions and the actual answers that students have given on quizzes. Those quiz answers are something that I thought didn't exist outside of the student analysis report and the quiz audit log, and have told people such. When I went down the rabbit hole to determine how long it had been there, it looks like about 3 years, but there's a note in the push comments that says something akin to "this API is intentionally undocumented."
I also re-discovered that filtering content is available in some API calls. For example, I can get a list of all assignments using the assignments API, but it includes the actual content of the assignment in the description property, and that slows it down. If I use the assignment_groups API instead, I can include the assignments and exclude the descriptions and the discussion entries, and it paginates on the number of assignment groups rather than the number of assignments, so most courses can fetch all assignments in a single API call. I discovered that by watching what Canvas did when it loaded the assignments page within the web UI.
The submissions API has some similar functionality where you can exclude certain information. Unfortunately, I'm trying to find the first attempt that someone made outside of the submissions with history included and not having much luck. Turning on the history duplicates all submission attempts, including the one that was already sent as the most recent. It's also what turns on the quiz responses, but that's one of the things you can't turn off.
Moving forward, I'm working on using the submissions API filter (the exposed ones) to only retrieve submissions or graded items made after a certain date. Rather than having to download all of the 1000+ submissions for my stats class (not including multiple attempts), I downloaded only the 3 that had changed. There's still an API call for each course, well probably two of them now (one for submitted_at and one for graded_at), but it's much faster than having to download everything, even if it gets duplicated in both API calls.
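For anyone who wants to try the same thing, here's a rough sketch of those two calls using the submitted_since and graded_since parameters on the submissions endpoint (the course id, date, and token are placeholders):

// Sketch: only fetch submissions that changed since the last run.
const fetch = require('node-fetch');
const apiToken = '11~asdfghjkl';        // placeholder
const lastRun = '2018-11-01T00:00:00Z'; // placeholder: when the previous run finished
const base = 'https://x.instructure.com/api/v1/courses/123/students/submissions' +
    '?student_ids[]=all&per_page=50';

// one call for new/updated submissions, one for newly graded items
[base + '&submitted_since=' + lastRun, base + '&graded_since=' + lastRun]
    .forEach(url => fetch(url, { headers: { 'Authorization': 'Bearer ' + apiToken } })
        .then(res => res.json())
        .then(subs => console.log(subs.length + ' changed submissions')));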
Bottleneck itself is for Node, and I picked it because of the number of downloads (figured what people download is probably better than things people don't download) and 0 dependencies. What I finally had to do was watch for the on('idle') trigger. I was making API calls using promises and recursion when there was a link.next header. When it got into bookmarked pages, one of my classes recursed deeply enough that it hit the Bottleneck maxConcurrency and stopped responding. Strangely, the one it locked on was the very last one, and I spent a bunch of time tracking down the issue thinking it was something with the last one rather than realizing it was the 20 executing requests that were the problem. Changing the per_page from 50 to 75 made the problem go away for that course, and then I rewrote it to not use recursion but just wait for Bottleneck to go idle, meaning it was finished. The write to the database is inside the Bottleneck limiter, so I make sure that the data is written before I reach the idle state and close the database connections. It probably would have been a lot easier to write using callbacks, but I'm trying to teach myself promises and understand them better.
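If the pattern helps anyone, the idle trick is roughly this (limiter and fetchPage as in my sketch above; saveToDatabase and closeConnections stand in for my real functions):

// Schedule everything, then let Bottleneck's 'idle' event signal completion
// instead of trying to detect the end of the recursion yourself.
limiter.on('idle', () => {
    // fires when nothing is queued and nothing is executing, so every
    // database write scheduled below has already finished
    closeConnections();
});

urls.forEach(url => {
    limiter.schedule(() => fetchPage(url).then(data => saveToDatabase(data)));
});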
I am just beginning to make 'baby-steps' toward concurrent API calls, so I am finding this discussion instructive and encouraging. I have done most of my API work using Python; however, for the concurrent stuff I am working with Clojure and the core.async library (based on Communicating Sequential Processes, and borrowing from the Go language). Bit of an idiomatic choice, I suppose. However, the core.async library gives you some pretty powerful tools (which I am still learning). Also, I am interested in being able to interact with my API query process as it runs. The idea being, even with the speed improvements that come with the concurrent API calls, I expect it to be a long-running process that I may need to tap into and get info out of (or dynamically change) as it runs. The Clojure REPL gives me that ability. Will keep folks posted on my progress.
Mike
@nardell , I do all my API work with Python, and this recent article, Speed Up Your Python Program With Concurrency, has me considering making baby steps toward concurrent API calls. I'm curious why you chose not to do it in Python. Are those features not available with the Python options?
dgrobani, good question. First, I will need to read the article you referenced (thanks!) to really answer the question of the difference of working with Python vs Clojure (with core.async) for concurrent programming. I consider myself a proficient Python programmer, but have done very little with asynchronous/concurrent processes in the language, so I can't really do a good comparison. Clojure's implementation of Communicating Sequential Processes (in the core.async library) is solid and well documented. So Clojure + core.async is a reasonable place to learn CSP (as a distinct approach to working with parallelism and concurrency). Certainly the Go language (which fundamentally embraces CSP) would be a good place to get a solid footing with these ideas, but that is not the train I am on. And I think there are CSP implementations in Python that I am eager to learn more about. On a different level, I personally find Clojure to be a good learning experience that affects how I approach programming in general. It is a highly opinionated language that makes me think differently. Since this is currently a "weekend and after-hours" project, I suppose I can take this enrichment-oriented approach. Will let you know how it goes, since the proof is in the pudding.
If you're using Python I would suggest using the wrapper UCF created whenever you can:
GitHub - ucfopen/canvasapi: Python API wrapper for Instructure's Canvas LMS
In general that's good advice, but be careful once people start talking about using concurrency to speed up fetching large data sets. One of the strengths of the module is the way it builds objects incrementally. That's great for looping over targeted data sets, and the module is amazing at doing what it does.
The tradeoff for building the objects on-the-fly, though, is that it doesn't provide iterators for any of its lists. If you start looping over large lists, like all the users in the instance, or all the courses in the instance, or *cough* page views for a veteran faculty member *cough*, you end up with objects for every item in the list of users, courses, or page views resident in memory. And if you don't have enough memory, your machine thrashes or crashes or does whatever it does when it runs out of memory.
And the size of the objects can grow astonishingly quickly. (ask me how I know...)
Hi Robert,
Thanks for this. I'm trying to access JSON files for unique course pages within JavaScript. However, I'm getting errors. I'm using xmlhttp.open("GET", "api call", true);. Any help would be appreciated.
Do you mean XMLHttpRequest? I don't remember seeing an xmlhttp in JavaScript. There's a basic example of usage at Using XMLHttpRequest - Web APIs | MDN
Your information is too generic to give much help. Without the error message or any specifics about the call you're making, there are too many things that could go wrong. The .open() doesn't even make the request until you use the .send(), so we don't know if you're including that. It might be specific to the API call you're making and having it set up incorrectly, or it might be that you're not supplying the request headers that specify the authorization and the content type.
I've started using the fetch API when I don't want to use jQuery. It returns a promise, so it's thenable.
My basic setup looks something like this:
const options = {
    'method': 'GET',
    'headers': {
        'authorization': 'Bearer ' + apiToken,
        'content-type': 'application/json',
        'accept': 'application/json'
    },
    'timeout': 5000
};
fetch(url, options)
    .then(res => res.json())
    .then(doSomethingWithJSONResults);

There are some values that have to be defined, of course, like apiToken and url. doSomethingWithJSONResults is a function that is passed the value from the res.json() promise. You could replace it with something like .then(json => { console.log(json); }) or whatever logic you need to perform. There's probably more error checking that needs to be done in there, but that's the overview.
I'm fairly new to promises (last year or so) but it's great if you want to make calls and then do something once it's done rather than waiting around for it to finish. I noticed you had the async=true set, so you may not want to process other stuff while you wait. As mentioned in the sub-thread going on here, I'm making multiple API calls at the same time to process things faster.
Hi James,
Thanks so much for your reply. I am currently using the following code in Javascript to get Json data:
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("GET", "https://x.instructure.com/api/v1/courses/54498/pages/glossary?access_token=X", true);
xmlhttp.responseType = "json";
xmlhttp.send();
xmlhttp.onload = function () {
    var response = xmlhttp.response;
    initparseData(response);
};
The initparseData function gets the data from the JSON file and does the things I want done (essentially, get the data from the JSON file, put it in an array, and then compare the content within a page to replace it). However, this works great with a static JSON file; calling the API doesn't work. There are no errors in the console, it just doesn't fire. Reading your answer makes me realise that my code might not be the most secure way to access the Canvas API.
Canvas recommends that you shouldn't send the access_token as part of the URL, but it does accept it.
That is a much more helpful example, thank you.
I copy/pasted your code into my browser's console so I could play with it and discovered what may be the problem. The response starts with while(1); and then it has the data you need. That is because Canvas isn't getting the 'accept: application/json' header.
If you add this line after the open and before the send(), it should fix that problem.
xmlhttp.setRequestHeader('accept','application/json');
I also moved the send() to after the onload(). I don't know if that's necessary as I don't use XMLHttpRequest myself, but I found a really useful post on StackOverflow that gave four ways to accomplish this and that's what they did. Check out the correct response in javascript - Parsing JSON from XmlHttpRequest.responseJSON - Stack Overflow . I did try it your way with just the extra line added and it worked.
No time to get into this; we are on the final stretch of moving everything out of the house for some house projects. But if you like code without explanation, it may answer some of your questions. I have been working on this for some time as part of a tutorial for the Developer Tools for the Canvas User series, but have had to abandon it because of Jive's annoying removal of on (line|error||load).
It covers making ajax calls without jQuery, using XMLHttpRequest(), and dealing with things that Canvas does to modify jQuery for CSRF, the while(1); you are seeing and how to deal with it. Just look at the lines 5 and 7 of pb.ajax.js compared to Canvas' jquery patch file. Note the CSRF Token in your Network Tab of Developer Tools.
Careful with the parameter parser, it has an issue but the fix is at work.
It's not finished, but provides a decent example.
canvas-lms/jquery.instructure_jquery_patches.js at master · instructure/canvas-lms · GitHub
Continuing on with some explanation. @James is correct: you should not add an access token/developer key to your in-browser JavaScript calls. You should instead get a copy of the CSRF token that Canvas generates and updates upon request, and pass that with your XHR requests. This allows the user to self-sign requests, instead of passing a single key for every user. This does limit the request to the user's permissions, but that is recommended. Granting users a higher level of access should be done with an LTI or by giving appropriate users increased permissions.
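For anyone trying this without jQuery, grabbing the token from the cookie looks roughly like this. This is a sketch based on what the Canvas jQuery patch does; the conversation payload is just an example:

// Sketch: read the _csrf_token cookie Canvas sets and echo it back as a header.
// Only non-GET requests need it; the payload below is just an example.
function getCsrfToken() {
    const match = document.cookie.match(/(^|;\s*)_csrf_token=([^;]+)/);
    return match ? decodeURIComponent(match[2]) : null;
}

fetch('/api/v1/conversations', {
    method: 'POST',
    credentials: 'same-origin', // send the Canvas session cookies along
    headers: {
        'X-CSRF-Token': getCsrfToken(),
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    },
    body: JSON.stringify({ recipients: ['123'], body: 'Hello' })
});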
I wasn't sure from the comments whether Prabhnoor was doing this from Node or from within the browser. References to reading from a JSON file and comparing the content to what's on a page in order to replace it made me think it wasn't from a browser. Ultimately, I wasn't quite sure so I just didn't address it.
carroll-ccsd makes a very good point that probably can't be repeated enough. If it is running from within a browser, then you definitely do not want to use API access tokens. The modern browsers allow you to inspect JavaScript and they can likely get that API token and do bad things with it. It might be okay for a quick test of something on your own machine, but never in any way that allows someone else to have access to it.
I wasn't quite sure either, and didn't have time to tinker, but had the XHR example to share. I think I fixated on the auth issue and the while(1); you mentioned. I am curious whether the while(1); shows up in Node; it seems like that would only be necessary for browser-based requests?
The Canvas server doesn't know whether it's node or a browser making the call. It could, but user-agents can be spoofed. Instead it relies on the accept header. I modified my accept string in Firefox to avoid the while(1); and let it properly show the JSON content.
Here is an API call to /api/v1/courses/2335978 that I made within the browser.
This is from Chrome with the default accept header
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Here it is in Firefox with the default accept header.
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Here's the same thing in Firefox where I've modified the Accept header by opening the about:config and modifying the network.http.accept.default header.
text/html,application/xhtml+xml,application/xml,application/json;q=0.9,*/*;q=0.8
Thanks so much Robert and James! This helped a lot. At the moment I had the file outside of Canvas and had a static JSON file for testing. Once the JavaScript was working the way I wanted, I tried to fetch the dynamic JSON data from a Canvas page using the access token to test whether the JavaScript still worked, but it wasn't. The last part will be to upload the JavaScript into the theme, for it to fetch data within a Canvas page and update it in another. Please let me know if this process sounds correct? Will I still need a CSRF token to access pages once the file has been uploaded into the Canvas theme?
If you are not using the built-in Canvas flavored jQuery you will need to pass the CSRF token, collecting it from the cookie. Note, that the CSRF token is different than a developer key or auth token.
I typically try my code in the browser developer tools/console before uploading to the Theme. It's really annoying* to troubleshoot code in the editor. So my usual process for Canvas/Theme or UserScripts is writing them in the console til they are solid, testing in a user script, and then finally uploading to the theme for a sanity check.
* Maybe it's because I have over 300 sub accounts (schools) that have their own theme... that the theme editor applies Global JS to. On Production it's pretty fast; on Beta and Test it can sometimes take a few minutes, and the UI responsiveness in the theme editor gets a little slow.
Are you aware of test and beta? edu.test.instructure.com
Yes, I am now. Thank you so much! I'll start by finding the CSRF token. I'll update my answer once I have my application successfully running. Again, thanks so much for all your help Robert and James.
There may be a way to avoid sending the CSRF unless it's a POST or PUT command that would need it. With fetch() you can set it up to automatically send the cookies so that you don't have to mess with them.
For example, when I make a call from within the browser, I do something like this:
const options = {
    'credentials': 'same-origin',
    'headers': {
        'accept': 'application/json'
    }
};
fetch(url, options)
    .then(doSomethingWithResults);

According to the documentation: Request.credentials - Web APIs | MDN I don't even have to add the credentials line since same-origin is the default.
True, wouldn't need to send the CSRF token for GET requests with XMLHttpRequest either.
I like the Fetch API too, but am still trying to figure out what my favorite is, and why.
I've run into articles like this,
https://medium.com/@shahata/why-i-wont-be-using-fetch-api-in-my-apps-6900e6c6fe78
...which is what got me experimenting and writing pb.ajax.js · GitHub
...which, admittedly, since we write a lot of global javascript is intended to hot swap jQuery's $.ajax() method with native JS.
If I'm going to use vanilla JS to do things, I'd like to make repeatable things reusable, keeping that file as small as it can be. I'd like to handle simple ajax calls and ajax promises without writing too much repeated code, settings, headers, etc. Which ends up being its own 'toolkit', as the article mentions.
As far as promises go, I also tinkered with and wrote this simple test awhile back.
a little async/await promise test · GitHub
The sample here, uses the testing API at https://reqres.in, randomizes an endpoint and tries it, showing an example of returning some data while waiting for a response from XHR.
Smaller example
even simpler async/await ajax · GitHub
None of this is in our code base yet, and this is just tinkering, learning, understanding.
I have not gone this in-depth with the Fetch API yet, still working on tooling and understanding XHR.
Getting a small, reusable library totally makes sense in your case. Our custom JavaScript only has the dashboard course card sorter in it, so it wouldn't make sense to create an entire library for that, but I'm still using jQuery (I think) because it relies on jQuery UI and that part won't work without jQuery.
I think I might have read that article already ... and decided that for me it was worth it since fetch() was built into the browsers and was easier than xhr() stuff. Then, when I switched to Node, I found the node-fetch library that allowed me to use fetch() so I didn't have to learn something else.
The code I wrote for this project has those helper functions that allow me to reuse things. For example, here's my function to get the assignment groups.
function getAssignmentGroups(courseId) {
    let query = variableSubstitution('/api/v1/courses/:course_id/assignment_groups', {
        'course_id': courseId,
        'include[]': 'assignments',
        'exclude_response_fields[]': ['rubric', 'description'],
        'override_assignment_dates': false
    });
    return getAPI(query);
}

Obviously, that could be one statement. The stringification isn't what I would like -- I'm not using JSON, although I probably should be -- I'm using the query string. I would have liked to have exclude_response_fields without the [] at the end, but the function I'm using to convert it doesn't add the [] when the field is an array, just numeric keys.
The variableSubstitution helper function isn't really necessary because I built it into the getAPI() function. It looks for :variable_names in the path and replaces them and removes it from the parameter list. Then it sends the rest of the parameters on to the getAPI() function.
The getAPI() function is the one that adds the headers. This particular program only needs the GET method since I'm basically building my own Canvas Data for certain tables, but with current information. Once it's prepared the url, it passes it off to another function that takes either a string or an array of strings and calls the API for each of them. That's the bottleneck portion. It also intelligently? handles recursion, going ahead and generating all of the urls for links with a page=\d+ format and one at a time for page=bookmark format. If I'm running multiple requests, the first one wasn't necessary, but if I'm testing with just a single course, it sure came in handy.
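The page-generation part is roughly this (a sketch; it assumes numeric pagination, which is why the bookmark style has to go one page at a time):

// Sketch: after the first response, build every remaining page URL at once
// from the rel="last" link so all the pages can be scheduled concurrently.
function buildPageUrls(linkHeader) {
    const last = linkHeader.split(',').find(part => part.includes('rel="last"'));
    if (!last) return [];
    const lastUrl = last.match(/<([^>]+)>/)[1];
    const pageMatch = lastUrl.match(/[?&]page=(\d+)/);
    if (!pageMatch) return []; // bookmark-style pagination; fall back to one at a time
    const lastPage = parseInt(pageMatch[1], 10);
    const urls = [];
    for (let page = 2; page <= lastPage; page += 1) {
        urls.push(lastUrl.replace(/([?&]page=)\d+/, '$1' + page));
    }
    return urls; // page 1 was already fetched; schedule these with the limiter
}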
I actually used .forEach() today. It made it a lot easier -- so thank you for sharing that one and insisting on using it when I wanted to for() it. I can't break out of it, but I don't need to, either. I was originally using .map(), but my linter was complaining about needing to return something, so I looked up the difference and decided that I really wanted the .forEach() in every case except one. I think I can do away with that one, too; I'm doing a promise when I don't need to, now that I'm not using the recursion with promises anymore. The code is turning into hackish slop, but it's getting done, and sometimes the time constraint is more important than the elegance factor. I also worked around Bottleneck's maxConcurrency issue and split my fetches up into three parts. First I grab enrollments so I have current course, section, and user information. Then I fetch all the assignment groups to get the assignments. Finally, I get all the submissions. The first two could handle 40-50 concurrent sessions, but I backed down to 25 for the last one. It could probably handle more, but I had API calls taking 15 seconds to find out that there was a 104-question quiz that was part of that history.
I still don't have my error checking down, but it ran today and downloaded all of the data over a 13 minute period. Now I have to write the code to put it into the CSV format needed by Starfish. Lots of design questions there like should we exclude 0's from the class average or report the median instead, but we won't really know what we want until we have it running and find out that it doesn't work the way we want it to.
Regarding your code: in your pb.ajax.js, are you going to do different things for stati (statuses?) of 400-422?
I would throw a 'use strict' in there, unless it's already there at a higher level in the same file.
Our un-minified, compiled, and pretty printed global file is just under 500 lines, which doesn't include a half dozen scripts we load asynchronously via AWS/S3. Dashboard Card sorter and Rubric sorter are only loaded on the pages that need them and Admin Sub Account Nav is loaded for admins only. This also doesn't include the sub account js that tap into or use the global javascript.
I'm really interested in Bottleneck now. There are a few endpoints I've always wanted to collect data from, but I was always too lazy to deal with things like the script failing, not completing, pagination, or the number of loops to deal with it, and I really hate writing the long nested if statements that often seem necessary to collect from multiple endpoints. You make it sound more fun with your explanation, and possibly worth the code/time for the extra data, given the time savings. Especially where it might fill gaps in CD.
pb.ajax.js
I wasn't entirely sure about the status codes. After forcing the various error codes and reviewing questions about those responses on the community, I was thinking about providing a more user-friendly, Canvas-API-flavored output. But that also seems bulky. Might not do that for myself; could do it if this was popular. Seems like a waste of bytes and lines, and could be resolved with 'library documentation'. The file is just a working experiment in common issues and preferences. The file is currently a little out of date. I improved the params function, but then found it still wasn't sufficient. It handles some depth arrays for most endpoints, but then I found
calendar_event[child_event_data][X][end_at] - Calendar Events - Canvas LMS REST API Documentation
Good eye, you're correct about use strict.
I don't want to start a new question for this, as it seems mine fits this discussion...
Is it possible to get a list of all students in the system without passing a course_id? I am trying to make a 'tool' that allows administrators to see if a person has a Canvas account by searching for partial name/username across all courses. I know I can loop over course_ids but that seems like a lot more calls. I am not finding an API call that would do this.
If this is something your institution needs via an API call, would it not make sense to try to look at what your provisioning report gives you from the root account level (with the users box checked)? The reports tab on the admin interface doesn't give you details on the fields that get returned so I'll list them here for convenience:
canvas_user_id, user_id, integration_id, authentication_provider_id, login_id, first_name, last_name, full_name, sortable_name, short_name, email, status, created_by_sis
You can get that information from the Users endpoint. It even accepts search terms if your users have partial names or ids of the students they’re looking for: https://canvas.instructure.com/doc/api/users.html#method.users.index
If you’re working in python the canvasapi module from ucf can handle pagination for you if you don’t mind using 3rd-party modules.
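If you just want to test it from the browser console first, the call looks something like this (the account id and search term are placeholders; account 1 is usually the root account):

// Sketch: search account users by partial name from the browser console.
// Uses the Canvas session cookies, so no token is needed for a GET.
const url = '/api/v1/accounts/1/users?search_term=' +
    encodeURIComponent('Nguyen') + '&per_page=100';
fetch(url, { credentials: 'same-origin', headers: { 'Accept': 'application/json' } })
    .then(res => res.json())
    .then(users => users.forEach(u => console.log(u.id, u.sortable_name)));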
Thank you Jay, this is exactly what I ended up using.
Follow-up question: Any idea if it is possible to pass first and last name as a search_term, or a wildcard character? It doesn't seem to accept a space, e.g. "T Nguyen", or a comma, e.g. "Nguyen, T". Would I need to use the "Find Recipients" call and then pass the userID as the search_term? I haven't played with the Find Recipients call yet, but I am about to.
Just looking for a way to narrow it down if we have a lot of Nguyens.
Thanks again!
Josh
*Search Recipients... Looks like Find Recipients is deprecated.