Where are assignments hidden in the CD2 framework? In CD1 they appear in the verbosely named "assignments_dim".
@reynlds I'm not sure if you're already familiar with this resource, but you can check all kinds of CD1-to-CD2 mappings here: https://docs.google.com/spreadsheets/d/1kqCXAD9K45L0QeEtbuuMAFp2fW8o0oC8EBzJf58SjrY/edit#gid=5275450...
I hope it helps.
Edina
I figured it out. The "dap list" command does show the "assignments" table, but the initdb is failing on this table with this error:
$ dap initdb --table assignments
2023-05-15 09:53:06,015 - INFO - Query started with job ID: 79406107-a882-4470-b1fb-84b425124aef
2023-05-15 09:53:06,017 - INFO - Query job still in status: waiting. Checking again in 5 seconds...
2023-05-15 09:53:11,419 - INFO - Query job still in status: running. Checking again in 5 seconds...
2023-05-15 09:53:16,716 - INFO - Query job still in status: running. Checking again in 5 seconds...
2023-05-15 09:53:21,997 - INFO - Query job still in status: running. Checking again in 5 seconds...
2023-05-15 09:53:27,338 - INFO - Query job still in status: running. Checking again in 5 seconds...
2023-05-15 09:53:32,657 - INFO - Query job still in status: running. Checking again in 5 seconds...
2023-05-15 09:53:38,216 - INFO - Data has been successfully retrieved:
{"id": "79406107-a882-4470-b1fb-84b425124aef", "status": "complete", "expires_at": "2023-05-16T14:53:02Z", "objects": [{"id": "79406107-a882-4470-b1fb-84b425124aef/part-00000-756d5a9c-b0f1-4587-bfb6-01b6e37a19a9-c000.json.gz"}, {"id": "79406107-a882-4470-b1fb-84b425124aef/part-00001-756d5a9c-b0f1-4587-bfb6-01b6e37a19a9-c000.json.gz"}, {"id": "79406107-a882-4470-b1fb-84b425124aef/part-00002-756d5a9c-b0f1-4587-bfb6-01b6e37a19a9-c000.json.gz"}, {"id": "79406107-a882-4470-b1fb-84b425124aef/part-00003-756d5a9c-b0f1-4587-bfb6-01b6e37a19a9-c000.json.gz"}, {"id": "79406107-a882-4470-b1fb-84b425124aef/part-00004-756d5a9c-b0f1-4587-bfb6-01b6e37a19a9-c000.json.gz"}, {"id": "79406107-a882-4470-b1fb-84b425124aef/part-00005-756d5a9c-b0f1-4587-bfb6-01b6e37a19a9-c000.json.gz"}, {"id": "79406107-a882-4470-b1fb-84b425124aef/part-00010-756d5a9c-b0f1-4587-bfb6-01b6e37a19a9-c000.json.gz"}], "schema_version": 1, "at": "2023-05-15T13:28:01Z"}
2023-05-15 09:53:51,142 - INFO - Downloading [object 1/7 - job 79406107-a882-4470-b1fb-84b425124aef]
2023-05-15 09:53:51,699 - INFO - Downloading [object 5/7 - job 79406107-a882-4470-b1fb-84b425124aef]
2023-05-15 09:53:52,082 - ERROR - Invalid isoformat string: '-1174-11-04T22:50:36.000'
I guess I should go through a full listing and see if there are any other tables missing. I would also like to troubleshoot this issue.
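For what it's worth, that "Invalid isoformat string" message is the error Python's `datetime.fromisoformat()` raises when it can't parse a timestamp, and the negative year in that record is exactly the kind of value it rejects. A minimal reproduction (this is standalone Python, not the dap client's own code):

```python
from datetime import datetime

# The timestamp from the failing record: a negative year that
# datetime.fromisoformat() cannot parse.
bad = "-1174-11-04T22:50:36.000"

try:
    datetime.fromisoformat(bad)
except ValueError as exc:
    # Prints: Invalid isoformat string: '-1174-11-04T22:50:36.000'
    print(exc)
```

So the data itself contains a corrupt timestamp; the loader just surfaces it as a parse failure.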
I believe the invalid isoformat error is a known issue that I think (hope?) will be handled better in a future version of the dap library. You may also be able to track down the problematic record in the source data and correct it via the API (the Canvas API, not the DAP API).
We had at least one table that had a record with a bad timestamp in it, and I worked around it by going into the dap code and wrapping the part that was failing in a try/except. That got me through the initdb step and since then I've been able to syncdb all of our tables without issue.
--Colin
@ColinMurtaugh which file(s) did you modify and what did you use for the "except" clause?
Unfortunately I had edited the file in situ and it's since been overwritten by a more recent install of instructure-dap-client, but IIRC I made the change to timestamp.py. If you re-run your initdb command with --loglevel debug I think you will get a stack trace that shows exactly where in that file the error happened.
In the except clause I just returned a valid datetime for some random date/time; I may have picked Jan 1, 1900 or something like that - don't really remember.
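In case it helps anyone else, a workaround along those lines might look like the sketch below. This is the general idea only, not the actual code from timestamp.py; the function name and the 1900-01-01 sentinel are placeholders:

```python
from datetime import datetime

# Sentinel returned for unparseable timestamps so initdb can continue
# instead of aborting on one bad record.
FALLBACK = datetime(1900, 1, 1)

def parse_timestamp(value: str) -> datetime:
    """Parse an ISO-format timestamp, substituting a sentinel on failure."""
    try:
        return datetime.fromisoformat(value)
    except ValueError:
        # Bad source data (e.g. a negative year); swallow the error.
        return FALLBACK
```

Keep in mind the sentinel makes bad rows indistinguishable from genuine 1900-01-01 dates, so this is a stopgap to get through initdb, not a fix.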
--Colin
meh...I'll wait for Instructure to push out a fix. Frustrating, as this is now "production-level". I would like @Edina_Tipter to chime in on this.
@reynlds Thank you for raising this. We are investigating and triaging the issue.
Just to say, the DB loader is a best-effort tool on our end, implemented to help customers transition and to provide a reference solution, so it does not fall under strict guarantees. Nevertheless, we want to make sure that it functions adequately.