AI Item Authoring Assistant

From Blank Page to First Draft – Introducing AI-powered authoring for teachers

Company

Instructure

Role

Senior Product Designer

Website

instructure.com

Industry

Education

Date

May - July 2025

The Story

In 2025, the assessment team created its first dedicated AI team—a cross-functional group of designers, engineers, and PMs tasked with reimagining how AI could responsibly support educators.

The team’s first mission was to build a tool that would prove AI could deliver real value to teachers without sacrificing trust or control. That tool became AI question generation.

Instead of spending hours drafting items from scratch, teachers could type a short topic prompt—“Photosynthesis basics,” “Causes of World War I”—and instantly receive editable draft questions. It was a small feature but a big step: it showed that AI was no longer just an experiment, but part of Instructure’s product strategy.

The Challenge

The dual challenge was clear:


For the team: As a newly formed group, we had to align quickly, establish principles for responsible AI, and demonstrate impact to build credibility.


For the feature: We had to design a simple flow that educators would actually trust and adopt. That meant solving for:

Expectations — What do teachers type, and how specific do they need to be?

Editability — How do we signal that AI’s output is just a draft, not a finished quiz?

Trust — How do we make the process transparent so teachers feel in control?

My Role

I was the lead designer for assessment AI, responsible for shaping both the first feature and the design foundations of the new AI team. My contributions included:

• Defining the end-to-end user flow for question generation, from input to editable output.

• Designing UI patterns for AI transparency and control, ensuring teachers always remained in charge.

• Leading prototype testing with educators, gathering insights that shaped prompt guidance and output clarity.

• Driving cross-team visibility, sharing progress in demos, Lunch & Learn sessions, and leadership reviews.

The Design Process

We approached this as a rapid but structured design cycle:

Discovery & Assumptions


  • Teacher research: Interviews and surveys highlighted blank page anxiety, lack of variety, and mistrust of “black box” AI as the biggest barriers.

  • Mapped authoring pain points: starting from scratch, formatting, balancing difficulty, and lack of variety.

  • Identified key requirements: transparency, editability, simple entry.

  • Competitive mapping: Benchmarked how other platforms approached AI authoring. This revealed that many focused on speed and auto-generation, but most lacked transparency and teacher agency—gaps we aimed to address.

Wireframing & Prototyping


• Created prototypes simulating the input/output flow.

• Explored variations in how prompts were framed.

Testing with Educators

• Conducted moderated sessions with teachers across K–12 and higher ed.

• Observed recurring pain points:

  • “I don’t know what to type” → led to example prompts.

  • “These look good but I’d like more variety” → led to regenerate/difficulty controls.

  • “I wouldn’t trust this without editing” → validated the editable draft-first approach.

Iteration


• Tightened visual hierarchy so teachers immediately understood “these are drafts for you to refine.”

Launch & Learn


• Rolled out as a limited pilot to gather adoption data.

• Currently using feedback loops and analytics (e.g., % of questions edited vs. accepted) to shape future AI features.

Success Metrics

Strategic Goals

• ≥30% reduction in time to create quizzes.

• +15% increase in quizzes created per course.

• Demo-ready beta for InstructureCon.

• Strengthen Canvas’ reputation for AI innovation.


Early MVP Outcomes

• Adoption: most teachers edited and kept at least one generated item per session.

• Feedback: teachers reported reduced “blank page anxiety” and described the tool as “a relief”.

• Limitations: usage remained narrow due to the MVP scope (a single source, question type, and DOK/outcome).

The Impact

This first release did more than help teachers write questions—it proved the value of the new AI team.


For teachers:

• Reduced “blank page anxiety” when starting assessments.

• Early adopters described feeling “relieved” and “curious to try more.”

• The Early Adopter Program is still in progress, so clear usage numbers are not yet available.


For Instructure:

• Established a foundation for future AI features like file-to-quiz and prompt-to-assessment.

• Shifted perception from isolated prototypes to a cohesive AI strategy.

• Positioned the AI team as a trusted partner for innovation across product areas.

Looking Ahead

This was the first proof point of our AI strategy. Competitor mapping and teacher research showed us where others fell short—and where we could differentiate. It set the stage for richer tools like File-to-Quiz, Prompt-to-Assessment, and a future AI assistant, all designed around one principle: no teacher starts from scratch.

Let's talk

hello@dorailles.com
