Remix Design Approach

Updated: Jan 8

For the Remix Design Approach activity, I began by creating an original design from scratch. This initial design reflected my understanding of different design models and methods, such as ADDIE, SAM, and ASSURE, and led me to develop my own design approach. In this blog, I will share my process for creating the first video, which presented my design concept, and then reflect on a second video produced using AI software with minimal edits. I will also discuss my experience working with both methods, noting the differences, challenges, and opportunities of each approach.


Original Design Process

Watch the short video below to view my first creation, which was built using Canva and Audacity, without the use of AI.



My first step was to create an outline of the key points I wanted to cover. Using Google Docs, I drafted an outline for each scene, including notes on the design elements I planned to use, and added a script to assist me when I recorded my voiceover for the video.


Once the outline and script were complete, I recorded my narration in Audacity, a program that allowed me to reduce background noise and improve overall sound quality. After completing the voiceover, I exported the file as an MP3 and uploaded it into Canva to incorporate into my video. Having the audio from the start assisted me in knowing how long each scene should be, and when certain visuals, text, or graphics should appear.

Image of Audacity Recording

I was inspired by collage video styles, which bring together photos, graphics, shapes, and other media to create a video. I used a mixture of videos, icons, graphics, and animations throughout, keeping text minimal and making images the focus. I added transitions between each slide to ensure the video flowed, and included music at a low volume to keep the video engaging without being loud enough to distract.

Image from original video showing graphic overlay with live video

After all final edits were complete, I exported the video from Canva as an MP4 file and uploaded it directly to MSU's MediaSpace page, which allowed me to add captions for improved accessibility.

AI Video Generation


The next stage of this activity involved using AI software to generate a second video that aligned with the content of my original video. To do this, I created a set of specific prompts that included key content and structure from my first video. These prompts were then entered into Synthesia, an AI-powered video creation platform, which produced a new version of the video. The result is shown below: an AI-generated video created entirely through Synthesia.


AI Video Prompt

Synthesia provides several options for creating AI-generated videos, such as uploading a file, adding a script, or giving specific prompts. For this exercise, I used the text box to enter specific prompts, incorporating different parts of my original script. The prompt I entered was as follows:


Create a 6 scene, instructional video that is 2-3 minutes in length, title “My Remix Design Approach.” Use a friendly voice throughout.

Scene 1 is the introduction to my remix design approach, created by Katie Peterson.

Scene 2, Provide a background of the models I explore, including ADDIE, SAM, and ASSURE, which offer agility and flexibility, and used feedback throughout. I needed an approach that is collaborative and adaptable.

Scene 3 focuses on collaboration being at the core of the design process. Include visuals of collaboration and mention using Quality Matters standards as a guide, ensuring learning objectives, assessment, materials, course technology, and accessibility all align.

Scene 4 explores the importance of brainstorming and embedding evaluation throughout the design process to pivot and adapt methods to better fit the problem.

Scene 5 breaks down the Responsive Design Approach. Show visuals and narration of the following 6 steps:
- Step 1: Analyze – Define the core problem and assess what learners already know, drawing from ADDIE’s foundation.
- Step 2: Method – Select the best methods and media, encouraging new and innovative approaches while meeting QM accessibility standards.
- Step 3: Create – Build an outline or wireframe, then evaluate its relevance to the learner and alignment with QM principles.
- Step 4: Develop – Move to a high-fidelity design based on feedback, refining creative elements.
- Step 5: Test – Pilot with a small group, gathering feedback on learner engagement and overall experience.
- Step 6: Launch – Deliver the final product, with an open invitation for further feedback as the evaluation process is never ending.

Scene 6 is a closing scene and states The Responsive Design Approach combines structure with agility, while keeping collaboration, standards, feedback, and adaptability at the center. Include a thank you to the viewer for watching.

After entering the prompts, I asked Synthesia to generate a video. Synthesia created seven scenes instead of six, automatically selecting the character, voice, video scenes, and backgrounds. Overall, the video content aligned closely with my prompts, though I did need to make a few adjustments to the narration and the text that appeared in some of the scenes.


One of the strengths of Synthesia is how straightforward editing can be. Narration appears directly below each scene, similar to working in a Word document, and characters or voices can easily be changed if preferred. Since the activity called for minimal edits to this video, I focused only on the narration and the text that appeared in the scenes to keep the video consistent with my original version.


Below is a screenshot of the editing options in Synthesia. On the left, you can see the list of scenes, while the center displays the selected scene with editable text just below it. Additional options are available to change colors, background media, music, and transitions. When a character is selected, even more customization appears, including choices for voice, character style, and animations.

Image of the scene editor in Synthesia

Challenges of Both

Let’s compare the two. Canva offers a wide range of templates, graphics, videos, and other elements that make it easy to create unique, fully customized videos. Uploading external content, such as the MP3 narration file, is easy, and editing tools like cutting scenes, moving audio, and adding animations or transitions are very user-friendly. However, creating a video entirely from scratch without AI is fairly time-consuming. Small changes, like adjusting the length of one scene, created a domino effect that required me to re-edit other scenes and the timing of text and animations. I also found myself occasionally stuck on how I wanted certain scenes to look, so starting a Canva video with a solid outline is really important.


Synthesia, on the other hand, was also easy to use. Simply enter prompts into a text box and let the AI do the rest. It responded well to my instructions, but I learned that being as specific as possible about each scene and its narration is necessary. When my prompts were vague, I had to spend more time making edits. Another limitation is style. Videos are restricted to a handful of avatars, often paired with office-style backgrounds. You also don’t know exactly how the avatar will look when speaking until the video is fully generated. With the free version, there are additional restrictions, such as limited monthly minutes and fewer character options, so it's not really possible to keep generating videos just to see what the avatar will look like.


Overall, both tools have their pros and cons. Canva is ideal when you want creativity and full customization, but it requires a lot more time and preparation. Synthesia is great for producing quick, professional-looking informational videos, such as training content, but has less flexibility in style and design.
