1. Reflect on the relationship between labels and images in a machine learning image classification dataset. Who has the power to label images and how do those labels and machine learning models trained on them impact society?

Creating and using image datasets for AI is about more than teaching computers to recognize pictures. It is also a way of embedding society's values, biases, and beliefs into AI systems. The way images are labeled shapes how AI models understand and interpret the world.

The people who create image datasets for AI, like researchers or crowd workers, decide what labels to give each picture. Their personal backgrounds, including their cultures and beliefs, shape how they label images. When they apply demeaning or unfair labels to pictures of people, those labels can harm the people depicted and spread misconceptions about them. The people who label these images hold real power: their choices shape how AI systems behave and the decisions those systems make, and those choices can carry unfair biases or stereotypes about certain groups of people.

When AI models learn from these labeled datasets, the effects reach beyond technology and into society. These systems are now used in high-stakes areas like hiring, policing, and healthcare. If the datasets contain biases or misconceptions, the models will likely reproduce and amplify them. Recognizing how much power image labeling carries, and the problems it can cause, is a first step toward building AI systems that are fairer for everyone and that serve society in a just and balanced way.

2. Train your own image classifier using Teachable Machine and apply the model to an interactive p5.js sketch. Feel free to try sound instead of, or in addition to, images.

https://editor.p5js.org/yg2348/sketches/9tNIgYVFS
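The linked sketch follows the common pattern for using an exported Teachable Machine image model with ml5.js inside p5.js. Below is a minimal sketch of that pattern, not the exact code in the link: it assumes the older ml5.js (pre-1.0) `imageClassifier` API with an `(error, results)` callback, and the model URL is a placeholder that would be replaced by the shareable link Teachable Machine generates when you export a model.

```javascript
// Minimal p5.js sketch: classify webcam frames with a Teachable Machine model via ml5.js.
// Assumes p5.js and ml5.js (pre-1.0) are both loaded in index.html.
// The model URL below is a placeholder, not the one used in the linked sketch.
let classifier;
let video;
let label = "waiting...";

const modelURL = "https://teachablemachine.withgoogle.com/models/PLACEHOLDER/";

function preload() {
  // Load the exported Teachable Machine model before setup() runs.
  classifier = ml5.imageClassifier(modelURL + "model.json");
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide(); // draw the video onto the canvas instead of as a separate element
  classifyVideo(); // start the classification loop
}

function classifyVideo() {
  // Ask the model for a prediction on the current video frame.
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // Results are sorted by confidence; keep the top label for display.
  label = results[0].label;
  classifyVideo(); // classify the next frame
}

function draw() {
  // Show the webcam feed with the current label drawn on top.
  image(video, 0, 0, width, height);
  fill(255);
  textSize(32);
  textAlign(CENTER);
  text(label, width / 2, height - 20);
}
```

To adapt this to a sound model, as the prompt suggests, a similar loop could be built around ml5's `soundClassifier` with a Teachable Machine audio model URL, listening to the microphone instead of classifying video frames.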