AI Item Authoring Assistant

From Blank Page to First Draft – Introducing AI-powered authoring for teachers

Company

Instructure

Role

Senior Product Designer

Website

instructure.com

Industry

Education

Date

May – July 2025

The Story

In 2025, the assessment team created its first dedicated AI team, a cross-functional group of designers, engineers, and product managers tasked with reimagining how AI could responsibly support educators.


The team’s first mission was to build a tool that proved AI could deliver real value to teachers without sacrificing trust or control. That tool became the AI question generation feature.


Instead of spending hours drafting items from scratch, teachers could type a short topic prompt such as “Photosynthesis basics” or “Causes of World War I” and instantly receive editable draft questions. It was a small feature, but an important step: it showed that AI was no longer just an experiment and had become a meaningful part of Instructure’s product strategy.

The Challenge

The dual challenge was clear:


For the team: As a newly formed group, we had to align quickly, establish principles for responsible AI, and demonstrate impact to build credibility.


For the feature: We had to design a simple flow that educators would actually trust and adopt. That meant solving for:

Expectations: What do teachers type, and how specific do they need to be?

Editability: How do we signal that AI’s output is just a draft, not a finished quiz?

Trust: How do we make the process transparent so teachers feel in control?

My Role

I was the lead designer for assessment AI, responsible for shaping both the first feature and the design foundations of our new AI team. I defined the end-to-end user flow for question generation, guiding the process from the initial teacher input to the editable output. I also designed the interface patterns that made AI interactions transparent and controllable, ensuring teachers always stayed in charge of their content.


Throughout the project, I led prototype testing sessions with educators to gather insights that helped refine prompt guidance and improve the clarity of generated questions. I also worked to keep the broader organization aligned, sharing progress regularly through demos, Lunch & Learn sessions, and leadership reviews to build visibility and trust in our direction.

The Design Process

We approached this as a rapid but structured design cycle:

Discovery & Assumptions

Through interviews and surveys, we learned that teachers often faced the same barriers when creating assessments: the pressure of starting from a blank page, limited variety in question types, and a deep mistrust of “black box” AI tools.


To better understand these challenges, I mapped out the authoring pain points in detail. Teachers struggled most when starting from scratch, formatting questions consistently, balancing difficulty levels, and keeping their quizzes diverse and engaging. From this research, three key requirements became clear: the experience needed to be transparent, editable, and simple to get started with.


I also conducted competitive research to see how other platforms approached AI authoring. Many focused heavily on speed and automatic generation, but few offered real visibility or control. This lack of transparency and teacher agency became the core problem we set out to solve.

Wireframing & Prototyping

I created several prototypes to simulate how teachers would move through the input and output flow, testing different ways the system could respond. During this phase, I also explored variations in how prompts were framed to understand what level of guidance helped teachers get the most useful results.

Testing with Educators

The research revealed several clear themes. Teachers showed strong interest in the feature, recognizing its potential to save time and fit into their existing workflow better than other AI tools they had tried. However, their trust in AI was conditional. Every participant emphasized that human review remained essential. They saw AI as a helpful assistant, not a replacement.


Control also proved to be a critical factor. Teachers wanted flexibility at every step, from choosing question types and editing content to selecting sources and understanding how the AI worked. Finally, we uncovered a few usability issues: an unclear distinction between the preview and review screens, ambiguous labels, and the lack of inline editing often caused unnecessary confusion, pointing us toward key areas for refinement.

Iteration

Based on this feedback, we tightened the visual hierarchy so teachers immediately understood “these are drafts for you to refine.”

Launch & Learn

We released the feature as a limited pilot to collect early adoption data and observe real-world usage. Since launch, we have been tracking feedback and analytics, including how many questions are edited compared to those that are accepted, to understand user behavior and guide the design of future AI features.

Success Metrics

Strategic Goals

• ≥30% reduction in time to create quizzes.

• +15% increase in quizzes created per course.

• Demo-ready beta for InstructureCon.

• Strengthen Canvas’ reputation for AI innovation.


Early MVP Outcomes

• Adoption: most teachers edited and kept at least one generated item per session.

• Feedback: teachers reported reduced “blank page anxiety” and described the tool as “a relief”.

• Limitations: usage remained narrow due to the MVP scope (a single source, question type, and DOK/outcome).

The Impact

This first release did more than help teachers write questions. It proved the value of the new AI team.


For teachers, it reduced the “blank page anxiety” that often comes with starting a new assessment. Early adopters described feeling relieved and curious to explore more. The Early Adopter Program is still ongoing, so we are continuing to collect feedback and usage data as adoption grows.


For Instructure, the project established the foundation for future AI features such as File-to-Quiz and Prompt-to-Assessment. It also helped shift the perception of AI work from isolated experiments to part of a cohesive product strategy, positioning the AI team as a trusted partner for innovation across multiple product areas.

Looking Ahead

This was the first proof point of our AI strategy. Competitor mapping and teacher research showed us where others fell short—and where we could differentiate. It set the stage for richer tools like File-to-Quiz, Prompt-to-Assessment, and a future AI assistant, all designed around one principle: no teacher starts from scratch.