We're having issues with the SQL generated by DAP. I was on 1.3.1 and am now on 1.4.0 to see if that might resolve the issue.
I'm using the Python dap module to run init/sync with the MSSQL dialect for our Canvas Data 2 tables, and I noticed it's failing to execute ALTER and CREATE statements. This prevents any sync or init from completing and keeps me from updating my data.
Below is the generated SQL that's producing the error:
```sql
CREATE TABLE "canvas"."groups" (
  "id" bigint NOT NULL,
  "name" varchar(255),
  "deleted_at" datetime2,
  "storage_quota" bigint,
  "lti_context_id" varchar(255),
  "created_at" datetime2 NOT NULL,
  "updated_at" datetime2 NOT NULL,
  "workflow_state" integer NOT NULL,
  "account_id" bigint NOT NULL,
  "sis_batch_id" bigint,
  "context_id" bigint NOT NULL,
  "context_type" integer NOT NULL,
  "migration_id" varchar(255),
  "group_category_id" bigint,
  "sis_source_id" varchar(255),
  "is_public" bit,
  "wiki_id" bigint,
  "max_membership" integer,
  "join_level" varchar(255),
  "avatar_attachment_id" bigint,
  "leader_id" bigint,
  "description" varchar(max),
  "uuid" varchar(255) NOT NULL,
  "default_view" integer,
  "non_collaborative" bit CONSTRAINT "df_non_collaborative" DEFAULT FALSE,
  CONSTRAINT "pk_canvas_groups" PRIMARY KEY ("id")
);
```
It also errors on these ALTER statements:
```sql
ALTER TABLE "canvas"."assignment_groups" ALTER COLUMN "group_weight" double precision;
ALTER TABLE "canvas"."assignments" ALTER COLUMN "points_possible" double precision;
ALTER TABLE "canvas"."content_migrations" ALTER COLUMN "progress" double precision;
ALTER TABLE "canvas"."content_tags" ALTER COLUMN "mastery_score" double precision;
ALTER TABLE "canvas"."grading_periods" ALTER COLUMN "weight" double precision;
ALTER TABLE "canvas"."group_categories" ADD "non_collaborative" bit CONSTRAINT "df_non_collaborative" DEFAULT FALSE;
ALTER TABLE "canvas"."groups" ADD "non_collaborative" bit CONSTRAINT "df_non_collaborative" DEFAULT FALSE;
ALTER TABLE "canvas"."learning_outcome_question_results" ALTER COLUMN "score" double precision;
ALTER TABLE "canvas"."learning_outcome_question_results" ALTER COLUMN "possible" double precision;
ALTER TABLE "canvas"."learning_outcome_question_results" ALTER COLUMN "original_score" double precision;
```
The error comes from these lines:
```sql
ALTER TABLE "canvas"."groups" ADD "non_collaborative" bit CONSTRAINT "df_non_collaborative" DEFAULT FALSE;
"non_collaborative" bit CONSTRAINT "df_non_collaborative" DEFAULT FALSE,
```
The issue is that the default value for a bit column in SQL Server has to be 0 or 1; it can't be the bare keyword FALSE. Can someone confirm this issue and let me know if it's being worked on?
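As a temporary measure while a fix is pending, one could patch the generated DDL before executing it, rewriting the FALSE/TRUE literals to the 0/1 that SQL Server requires for bit defaults. This is only a sketch; `fix_bit_defaults` is a hypothetical helper, not part of DAP, and it assumes you can intercept the SQL text before it runs:

```python
import re

def fix_bit_defaults(ddl: str) -> str:
    """Rewrite T-SQL bit defaults of FALSE/TRUE to 0/1.

    SQL Server only accepts 0 or 1 as the default for a bit column,
    but the generated DDL emits DEFAULT FALSE.
    """
    ddl = re.sub(r"\bDEFAULT\s+FALSE\b", "DEFAULT 0", ddl)
    ddl = re.sub(r"\bDEFAULT\s+TRUE\b", "DEFAULT 1", ddl)
    return ddl

broken = ('ALTER TABLE "canvas"."groups" ADD "non_collaborative" bit '
          'CONSTRAINT "df_non_collaborative" DEFAULT FALSE;')
print(fix_bit_defaults(broken))
```

The word-boundary anchors keep the substitution from touching column or constraint names that merely contain the string FALSE.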
Hello everyone!
Thank you for your patience. We managed to hunt down the bug and fix it in one of the dependency libraries used by the DAP CLI.
We will not release a bugfix version of the DAP CLI, but you can run
`pip install -U pysqlsync`
to fix the issue.
Once this package has been updated, your issue should be resolved: initdb and syncdb should work seamlessly for MSSQL in these cases.
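To confirm the fixed dependency actually landed in your environment, here is a quick check. It assumes the fix shipped in pysqlsync 0.8.3 (the version a later reply in this thread reports as working); adjust `required` if that assumption is wrong:

```python
from importlib.metadata import PackageNotFoundError, version

def needs_upgrade(installed: str, required: str = "0.8.3") -> bool:
    """Compare dotted version strings numerically (no pre-release handling)."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(required)

try:
    installed = version("pysqlsync")
    print("pysqlsync", installed,
          "- upgrade needed" if needs_upgrade(installed) else "- ok")
except PackageNotFoundError:
    print("pysqlsync is not installed")
```

Comparing integer tuples avoids the string-comparison trap where "0.10.0" would sort before "0.8.3".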
Hi @jasonmiciano – thanks for posting this! I've been working through the same issue for the past couple of days and really appreciate your insights.
I'm also using DAP 1.4.0 with MSSQL Server and have encountered similar problems, especially with the `groups` and `group_categories` tables.
In my case, around **44 tables** consistently fail when running `dap syncdb --tables all`. Oddly, many of these will sync successfully when I run `dap syncdb` **individually** for each table. (They still fail if I list the tables out in one command, e.g. `--tables groups,late_policies,learning_outcome_groups`.)
To work around this, I rewrote my Python script to:
1. Attempt to sync all tables by namespace;
2. Capture any failed tables into a dictionary (tables by namespace);
3. Loop through those failed tables and run `dap syncdb` on them individually.
This significantly improved the table sync success rate — though it's clearly a workaround and not a long-term solution.
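The three steps above can be sketched roughly like this. The `--tables` flag spelling follows the commands quoted in this thread, detecting failure via the CLI exit code is an assumption, and credentials are assumed to come from the environment as the CLI expects:

```python
import subprocess
from typing import Callable, Dict, List

def run_dap_sync(namespace: str, tables: List[str]) -> bool:
    """Shell out to `dap syncdb` for a comma-separated table list.

    Returns True on a zero exit code. Flag names follow the commands
    shown in this thread; verify against your installed CLI version.
    """
    result = subprocess.run(
        ["dap", "syncdb", "--namespace", namespace,
         "--tables", ",".join(tables)],
        capture_output=True, text=True)
    return result.returncode == 0

def sync_with_retry(tables_by_ns: Dict[str, List[str]],
                    sync: Callable[[str, List[str]], bool] = run_dap_sync
                    ) -> Dict[str, List[str]]:
    """Step 1: batch sync per namespace; steps 2-3: retry failures singly.

    Returns the tables that still failed after individual retries.
    """
    still_failed: Dict[str, List[str]] = {}
    for ns, tables in tables_by_ns.items():
        if not tables or sync(ns, tables):   # batch attempt succeeded
            continue
        failed = [t for t in tables if not sync(ns, [t])]  # retry one by one
        if failed:
            still_failed[ns] = failed
    return still_failed
```

Injecting the `sync` callable keeps the retry logic testable without a live DAP connection.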
Here’s a list of tables that have been regularly failing for me, by namespace:
```json
{
"canvas": [
"group_categories",
"group_memberships",
"groups",
"late_policies",
"learning_outcome_groups",
"learning_outcome_question_results",
"learning_outcome_results",
"learning_outcomes",
"lti_line_items",
"lti_resource_links",
"lti_results",
"master_courses_child_content_tags",
"master_courses_child_subscriptions",
"master_courses_master_content_tags",
"master_courses_master_migrations",
"master_courses_master_templates",
"master_courses_migration_results",
"originality_reports",
"outcome_proficiencies",
"outcome_proficiency_ratings",
"post_policies",
"pseudonyms",
"quiz_groups",
"quiz_questions",
"quiz_submissions",
"quizzes",
"role_overrides",
"roles",
"rubric_assessments",
"rubric_associations",
"rubrics",
"score_statistics",
"scores",
"submission_comments",
"submission_versions",
"submissions",
"user_account_associations",
"users",
"web_conference_participants",
"web_conferences",
"wiki_pages",
"wikis"
],
"canvas_logs": [],
"catalog": [
"user_defined_fields",
"users"
]
}
```
I suspect the root issue may involve:
* Foreign key constraints (as you noted),
* MSSQL-specific datatype or schema translation issues,
* Or potential differences in resource handling (timeouts, transaction limits, etc.) between batch and individual syncs.
Happy to dig deeper and collaborate on identifying a more robust fix until something can be done to make the sync more stable for MSSQL Server!
These tables failed due to errors executing ALTER TABLE statements, often involving double precision conversions or adding constraints like non_collaborative.
Tables: the same canvas tables listed above, from group_categories through wikis.
As a workaround, I found it best to ignore the tables that would trigger a schema update (the schema update is bundled, so related tables fail along with it). What I did was remove two entries from the instructure_dap.table_sync table, groups and group_categories; when I run the Python sync, it then avoids attempting the schema update that fails.
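That table_sync cleanup can be expressed as a parameterized DELETE. A hedged sketch follows; `build_unsync_sql` is a hypothetical helper, and the `source_namespace`/`source_table` column names are assumptions about the bookkeeping table's schema, so verify them against your database before running anything:

```python
from typing import List, Tuple

def build_unsync_sql(namespace: str, tables: List[str]) -> Tuple[str, List[str]]:
    """Build a parameterized DELETE against instructure_dap.table_sync.

    Removing a table's row means the next sync no longer tracks it, so
    the bundled schema update that fails is never attempted. Column
    names are assumptions; check your table_sync schema first.
    """
    placeholders = ", ".join("?" for _ in tables)
    sql = ("DELETE FROM instructure_dap.table_sync "
           f"WHERE source_namespace = ? AND source_table IN ({placeholders})")
    return sql, [namespace, *tables]

sql, params = build_unsync_sql("canvas", ["groups", "group_categories"])
# Execute with your MSSQL driver, e.g. pyodbc: cursor.execute(sql, params)
```

Using `?` placeholders keeps the table names out of the SQL text, which matters less here than in user-facing code but is still good hygiene.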
The way I also do the table sync is with this command in my script:
@sgergely Thank you for your work on this. The update to pysqlsync 0.8.3 seems to have done the trick!