Interactive and Multimedia Learning

Category: Assignment 1 – Midterm Review – Modules 1 & 2 Blog Posts and Comments

MODULE 2: Exploring Generative AI

AI-generated image created with the prompt “a cat sitting by the window watching the aurora with a city backdrop”.

Exploring Craiyon

This week, I explored Craiyon, a generative AI tool for image creation. Using Craiyon, I noticed that the tool generates visually engaging images from text prompts and offers several variations to choose from. The outputs were sometimes abstract, and the image quality was not always great; however, it was a fun experience to see how easily it can generate an image from a prompt.


A SAMR analysis on Craiyon done by ChatGPT

An analysis generated by ChatGPT with the prompt “give me a SAMR analysis on the generative AI tool Craiyon”:

Here’s a concise SAMR analysis of Craiyon, the Generative AI tool:

SAMR Analysis of Craiyon

  1. Substitution:
  • Description: Craiyon replaces traditional methods of image creation (e.g., drawing, painting, or using graphic design software) with a text-based input system. Users can generate images based solely on textual prompts.
  • Example: Instead of creating a hand-drawn illustration, a student types a description and receives an AI-generated image in seconds.
  2. Augmentation:
  • Description: Craiyon enhances the image creation process by allowing users to quickly generate unique visuals that can support and illustrate concepts in various subjects. It can provide inspiration and diverse perspectives that a single artist might not capture.
  • Example: A teacher can input prompts related to historical events to generate relevant images, which can help students visualize and discuss the context more effectively.
  3. Modification:
  • Description: Craiyon facilitates significant redesign of tasks by enabling collaborative projects. Students can work together to create a series of images that represent a narrative or complex idea, allowing for more interactive and engaging learning experiences.
  • Example: A group project where students use Craiyon to generate different scenes from a story they are studying, which they then present as a visual summary.
  4. Redefinition:
  • Description: Craiyon enables new learning tasks that were not possible before. Students can create visual content that reflects multiple viewpoints on a topic, encouraging deeper analysis and discussion.
  • Example: An assignment where students generate images depicting various cultural interpretations of a theme, followed by a class discussion on the differences and similarities portrayed.

Conclusion

The SAMR model illustrates how Craiyon can transform learning experiences, from basic image substitution to redefining how students engage with and interpret visual information. By leveraging its capabilities, educators can foster creativity, collaboration, and critical thinking in their classrooms.
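
For the curious: I generated the analysis above in the ChatGPT web interface, but the same prompt could also be sent programmatically. Below is a minimal sketch using OpenAI’s Python client; the model name is an assumption, and the code is illustrative only, not part of my actual workflow.

    # Minimal sketch, not how the analysis above was produced (that was the
    # ChatGPT web interface). The model name "gpt-4o-mini" is an assumption;
    # substitute any model your account can access.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": "give me a SAMR analysis on the generative AI tool Craiyon",
            },
        ],
    )

    # Print the model's reply, analogous to the analysis quoted above
    print(response.choices[0].message.content)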


Reflection on Generative AI

I found Craiyon to be helpful for quickly visualizing concepts based on my prompts. The analysis above indicates that Craiyon can enhance creative tasks through Substitution and Augmentation, while also transforming them via Modification and Redefinition. However, I have questions about the originality of the generated content and about potential biases in its training data. As for ChatGPT, it seems like a valuable resource for deepening my understanding of complex topics.

To integrate generative AI effectively into educational settings, it is important to establish guidelines and to promote ethical use and accessibility. This could include developing comprehensive training programs for educators and students. These tools can create a more inclusive learning environment by supporting learners with diverse learning styles; they could, however, exclude those who do not have access to the technology.

Moving forward, I plan to use generative AI tools primarily for brainstorming and visualizing concepts, and avoid relying on them for critical decision-making or complex analyses. As for the next 2-3 years, I anticipate that generative AI tools will become even more advanced, with improved accuracy and more user-friendly interfaces.

MLA CITATION:

Craiyon. “Craiyon: A Free AI Model for Generating Images from Text.” Craiyon, 2024, www.craiyon.com. Accessed 12 Oct. 2024.

OpenAI. “ChatGPT: Optimizing Language Models for Dialogue.” OpenAI, 2023, www.openai.com/chatgpt. Accessed 12 Oct. 2024.

MODULE 1: How do we learn?

Adobe Illustrator: The Basics

I’ve been using Adobe Illustrator since the beginning of the year and have found it fun and rewarding. I wanted to share my enthusiasm and help others discover its potential for creativity and design.

I created a screencast that covers the basics of using its tools. It’s designed to help beginners get comfortable with the interface and start creating their own designs, whether for personal projects or professional work.

Using Mayer’s Cognitive Theory of Multimedia Learning

While recording the screencast, I kept the modality principle in mind. Rather than overloading the screen with text, I opted for narration to explain concepts while demonstrating the actions on screen. This choice is also supported by Dual Coding Theory, which suggests that learners process auditory and visual information in separate channels. By narrating while showing how to use a tool, I hoped to activate both channels and create a richer learning experience.

I also made use of the pre-training principle (through PowerPoint slides) to introduce key concepts before diving into the application. In the screencast, I explained terms like “path” and “anchor point” early on, ensuring that learners had a solid understanding of the basics.

Cognitive Load Theory

One of the main principles I applied was Cognitive Load Theory. Since Illustrator has so many features, I reduced extraneous cognitive load by covering only a select few. Rather than introducing too many tools at once, I broke the content into smaller, manageable sections. This allowed learners to focus on one concept at a time and retain the material more effectively.

Segmenting Principle

I used the segmenting principle when teaching the Pen tool: I started with the basics, creating straight lines, before moving on to curves and complex shapes. This helped manage intrinsic cognitive load, making the tutorial easier to follow by gradually increasing the complexity.

What Surprised Me

One aspect of the module that surprised me was the research showing that adding an image of yourself to the screen during a presentation doesn’t actually enhance learning and may even distract from the content (the image principle). I’ll keep this in mind when deciding whether to include my image in future screencasts and focus more on the quality of my narration and demonstrations instead.

Final Thoughts

Reflecting on this module, I’ve gained a better understanding of how people learn. The CTML principles have refined my approach to designing screencasts, helping me create more effective multimedia experiences. Moving forward, I will focus on reducing extraneous load, using dual coding while avoiding redundancy, and personalizing content to keep learners engaged.
