We convolve this output further and get an output of 7 X 7 X 40 as shown above. Whereas in case of a plain network, the training error first decreases as we train a deeper network and then starts to rapidly increase. We now have an overview of how ResNet works. So, if two images are of the same person, the output will be a small number, and vice versa. Next, we will define the style cost function to make sure that the style of the generated image is similar to the style image. This course is part of the Deep Learning Specialization. After convolution, the output shape is a 4 X 4 matrix. Suppose we use the lth layer to define the content cost function of a neural style transfer algorithm. The max pool layer is used after each convolution layer with a filter size of 2 and a stride of 2. Tanh: it squashes its input to the range (-1, 1). The courses in the Specialization are in the following sequence: 1) Neural Networks and Deep Learning, 2) Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization. The equation to calculate activation using a residual block is given by: a[l+2] = g(z[l+2] + a[l]). If you want to break into AI, this Specialization will help you do so. These include the number of filters, size of filters, stride to be used, padding, etc. Suppose an image is of the size 68 X 68 X 3. Instructor: Andrew Ng, DeepLearning.ai. Generally, a layer which is neither too shallow nor too deep is chosen as the lth layer for the content cost function. Recent resurgence: state-of-the-art technique for many applications. Artificial neural networks are not nearly as complex or intricate as the actual brain structure (based on a slide by Andrew Ng). In neural networks, there are five common activation functions: Sigmoid, Tanh, ReLU, Leaky ReLU, and Exponential LU. Thus, the content cost function can be defined as follows: J_content(C, G) = ½ * ||a[l](C) - a[l](G)||².
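The content cost function above can be sketched directly in NumPy. The activations a_C and a_G below are simple placeholders standing in for the lth-layer activations of the content and generated images; only the formula itself comes from the text:

```python
import numpy as np

def content_cost(a_C, a_G):
    """J_content(C, G) = 1/2 * || a[l](C) - a[l](G) ||^2 (sum of squared differences)."""
    return 0.5 * np.sum((a_C - a_G) ** 2)

# Placeholder activations from the lth layer, shape (height, width, channels)
a_C = np.ones((4, 4, 8))
a_G = np.zeros((4, 4, 8))
print(content_cost(a_C, a_G))  # 0.5 * 4*4*8 = 64.0
```

In practice a_C and a_G would come from a pretrained ConvNet, and gradient descent would update G to lower this cost.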
Instead of choosing what filter size to use, or whether to use a convolution layer or a pooling layer, inception uses all of them and stacks all the outputs. A good question to ask here: why are we using all these filters instead of using just a single filter size, say 5 X 5? That's the first test, and there really is no point in moving forward if our model fails here. Before diving deeper into neural style transfer, let's first visually understand what the deeper layers of a ConvNet are really doing. If you are looking for a job in AI, after this course you will also be able to answer basic interview questions. We will help you become good at Deep Learning. Amazing course; the lecturer breaks it down and makes it very simple, and the quizzes and assignments were very helpful to ensure your understanding of the content. We have learned a lot about CNNs in this article (far more than I did in any one place!). I've taken Andrew Ng's "Machine Learning" course prior to my "Deep Learning Specialization". When our model gets a new image, it has to match the input image with all the images available in the database and return an ID. For the sake of this article, we will be denoting the content image as "C", the style image as "S" and the generated image as "G". How do we overcome this? You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. The class of the image will not change in this case. If yes, feel free to share your experience with me - it always helps to learn from each other. Below are the steps for generating the image using the content and style images. Suppose the content and style images we have are given; first, we initialize the generated image. After applying gradient descent and updating G multiple times, we get something like this. Not bad! You will master not only the theory, but also see how it is applied in industry.
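The inception idea of running several filter sizes in parallel and stacking the outputs can be sketched with NumPy arrays. The 28 X 28 spatial size and the per-branch channel counts below are illustrative assumptions (every branch must use "same" padding so the spatial sizes match):

```python
import numpy as np

# Outputs of four parallel branches over the same 28 x 28 input ("same" padding).
# The channel counts (64, 128, 32, 32) are illustrative, not the network's actual config.
out_1x1 = np.zeros((28, 28, 64))    # 1 x 1 convolution branch
out_3x3 = np.zeros((28, 28, 128))   # 3 x 3 convolution branch
out_5x5 = np.zeros((28, 28, 32))    # 5 x 5 convolution branch
out_pool = np.zeros((28, 28, 32))   # max-pooling branch (followed by 1 x 1 conv)

# Inception stacks all branch outputs along the channel dimension
stacked = np.concatenate([out_1x1, out_3x3, out_5x5, out_pool], axis=-1)
print(stacked.shape)  # (28, 28, 256)
```

The network then learns which branches matter, instead of us hand-picking a single filter size.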
Once we pass it through a combination of convolution and pooling layers, the output will be passed through fully connected layers and classified into corresponding classes. This is one layer of a convolutional network. || f(A) - f(P) ||² <= || f(A) - f(N) ||². AI is transforming multiple industries. This is how a typical convolutional network looks: we take an input image (size = 39 X 39 X 3 in our case), convolve it with 10 filters of size 3 X 3, and take the stride as 1 and no padding. The intuition behind this is that a feature detector which is helpful in one part of the image is probably also useful in another part of the image. We will discuss the popular YOLO algorithm and different techniques used in YOLO for object detection. Finally, in module 4, we will briefly discuss how face recognition and neural style transfer work. So a single filter is convolved over the entire input and hence the parameters are shared. This will give us an output of 37 X 37 X 10. The second advantage of convolution is the sparsity of connections. Let's look at an example: the dimensions above represent the height, width and channels in the input and filter. Originally, a neural network is an algorithm inspired by the human brain that tries to mimic a human brain. It essentially depends on the filter size.
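The 39 X 39 X 3 example above follows the standard output-size formula for a convolution. A small helper (the function name is my own) makes it easy to check:

```python
def conv_output_size(n, f, padding=0, stride=1):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * padding - f) // stride + 1

# 39 x 39 input, 3 x 3 filter, stride 1, no padding -> 37 x 37 (x 10 filters)
print(conv_output_size(39, 3))             # 37
# 6 x 6 input, 3 x 3 filter -> the 4 x 4 output discussed later in the article
print(conv_output_size(6, 3))              # 4
```

The number of channels of the output equals the number of filters, independent of this spatial formula.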
The total number of parameters in LeNet-5 are: An illustrated summary of AlexNet is given below. This network is similar to LeNet-5 with just more convolution and pooling layers. The underlying idea behind VGG-16 was to use a much simpler network where the focus is on having convolution layers that have 3 X 3 filters with a stride of 1 (and always using the same padding). - Know how to implement efficient (vectorized) neural networks. Awesome, isn't it? In this course, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. The great thing about this course is programming a neural network while learning the concepts from scratch. Your electronic Certificate will be added to your Accomplishments page; from there, you can print your Certificate or add it to your LinkedIn profile. Do share your thoughts with me regarding what you learned from this article. Now that we have understood how different ConvNets work, it's important to gain a practical perspective around all of this. Let's have a look at the summary of notations for a convolution layer. Let's combine all the concepts we have learned so far and look at a convolutional network example. If you want to break into cutting-edge AI, this course will help you do so.
So, the last layer will be a fully connected layer having, say, 128 neurons. Here, f(x(1)) and f(x(2)) are the encodings of images x(1) and x(2) respectively. To calculate the second element of the 4 X 4 output, we will shift our filter one step towards the right and again get the sum of the element-wise product. Similarly, we will convolve over the entire image and get a 4 X 4 output. So, convolving a 6 X 6 input with a 3 X 3 filter gave us an output of 4 X 4. Neural networks were very widely used in the 80s and early 90s; their popularity diminished in the late 90s. We train a neural network to learn a function that takes two images as input and outputs the degree of difference between these two images. Face recognition is where we have a database of a certain number of people with their facial images and corresponding IDs. Before taking this course, I was not aware that a neural network … Neural Networks - Origins: algorithms inspired by the brain. For a new image, we want our model to verify whether the image is that of the claimed person. Let's see how it works. Thanks to professor Andrew Ng and the team for their dedication. Also, we apply a 1 X 1 convolution before applying the 3 X 3 and 5 X 5 convolutions in order to reduce the computations. In order to define a triplet loss, we take an anchor image, a positive image and a negative image. How do we do that? There are a lot of hyperparameters in this network which we have to specify as well. We can design a pretty decent model by simply following the below tips and tricks. With this, we come to the end of the second module. In this section, we will focus on how the edges can be detected from an image. In order to perform neural style transfer, we'll need to extract features from different layers of our ConvNet. Quite a conundrum, isn't it?
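The 6 X 6 input convolved with a 3 X 3 filter giving a 4 X 4 output can be reproduced with a minimal NumPy loop (a sketch, not an optimized implementation; the pixel values are placeholders):

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' convolution as used in CNNs (technically cross-correlation, no flip)."""
    n, f = image.shape[0], kernel.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sum of the element-wise product of the filter and the image patch
            out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)   # placeholder 6 x 6 grayscale image
vertical_edge = np.array([[1., 0., -1.]] * 3)      # vertical edge detector from the text
print(convolve2d(image, vertical_edge).shape)      # (4, 4)
```

Shifting the window one step at a time, as described above, is exactly what the two loops do.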
In this section, we will discuss various concepts of face recognition, like one-shot learning, the Siamese network, and many more. Deep learning engineers are highly sought after, and mastering deep learning will give you numerous new career opportunities. In the previous article, we saw that the early layers of a neural network detect edges from an image. You'll need to complete this step for each course in the Specialization, including the Capstone Project. Neural Networks and Deep Learning. The course may offer 'Full Course, No Certificate' instead. This is how we can detect a vertical edge in an image. Yes, Coursera provides financial aid to learners who cannot afford the fee. Inception does all of that for us! Apply for it by clicking on the Financial Aid link beneath the "Enroll" button on the left. In a convolutional network (ConvNet), there are basically three types of layers. Let's understand the pooling layer in the next section. The objectives behind the second module of course 4 are: in this section, we will look at the following popular networks. We will also see how ResNet works and finally go through a case study of an inception neural network. You can get the codes here. We train the model in such a way that if x(i) and x(j) are images of the same person, || f(x(i)) - f(x(j)) ||² will be small, and if x(i) and x(j) are images of different people, || f(x(i)) - f(x(j)) ||² will be large. But what is a convolutional neural network and why has it suddenly become so popular? In this post, you discovered a breakdown and review of the convolutional neural networks course taught by Andrew Ng on deep learning for computer vision. - Understand the key parameters in a neural network's architecture. CNNs have become the go-to method for solving any image data challenge.
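The verification rule described here (small encoding distance means same person) can be sketched as follows. The 0.7 threshold and the function name are illustrative assumptions, not values from the course:

```python
import numpy as np

def same_person(f_x1, f_x2, threshold=0.7):
    """Face verification: images match if || f(x1) - f(x2) ||^2 is below a threshold."""
    return np.sum((f_x1 - f_x2) ** 2) < threshold

# Identical encodings -> distance 0 -> verified as the same person
print(same_person(np.zeros(128), np.zeros(128)))   # True
# Very different encodings -> large distance -> different people
print(same_person(np.zeros(128), np.ones(128)))    # False
```

Recognition (one-to-k) then just applies this check against every encoding stored in the database.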
In neural network layer notation, the input features are denoted as "layer 0". We have seen how a ConvNet works, the various building blocks of a ConvNet, its various architectures, and how they can be used for image recognition applications. This way we don't lose a lot of information and the image does not shrink either. After finishing this Specialization, you will likely find creative ways to apply it to your work. Let's try to solve this: no matter how big the image is, the parameters only depend on the filter size. Face recognition is probably the most widely used application in computer vision. With me so far? This will inevitably affect the performance of the model. Since deep learning isn't exactly known for working well with one training example, you can imagine how this presents a challenge. One-shot learning is where we learn to recognize the person from just one example. In convolutions, we share the parameters while convolving through the input. "Artificial intelligence is the new electricity." Similarly, we can create a style matrix for the generated image. Using these two matrices, we define a style cost function. Keep in mind that the number of channels in the input and filter should be the same. These are the hyperparameters for the pooling layer. Module objectives: to understand the challenges of object localization, object detection and landmark finding; understanding and implementing non-max suppression; understanding and implementing intersection over union; to understand how we label a dataset for an object detection application; to learn the vocabulary used in object detection (landmark, anchor, bounding box, grid, etc.). Just the right mixture to get a good idea of CNNs and their architecture. Let's find out! Why do you need non-linear activation functions?
It is a one-to-k mapping (k being the number of people) where we compare an input image with all the k people present in the database. Access to lectures and assignments depends on your type of enrollment. In addition to exploring how a convolutional neural network (ConvNet) works, we'll also look at different architectures of a ConvNet and how we can build an object detection model using YOLO. A positive image is the image of the same person that's present in the anchor image, while a negative image is the image of a different person. Specifically, you learned about: 1) Neural Networks and Deep Learning, 2) Improving Deep Neural Networks. In many cases, we also face issues like lack of data availability, etc. This option lets you see all course materials, submit required assessments, and get a final grade. a[l] needs to go through all these steps to generate a[l+2]. In a residual network, we make a change in this path. There are residual blocks in ResNet which help in training deeper networks. We can generalize it and say that if the input is n X n and the filter size is f X f, then the output size will be (n-f+1) X (n-f+1). There are primarily two disadvantages here. To overcome these issues, we can pad the image with an additional border, i.e., we add one pixel all around the edges. We define the style as the correlation between activations across channels of that layer. We have seen that convolving an input of 6 X 6 dimension with a 3 X 3 filter results in a 4 X 4 output. Finally, we'll tie our learnings together to understand where we can apply these concepts in real-life applications (like facial recognition and neural style transfer). 3*1 + 0*0 + 1*(-1) + 1*1 + 5*0 + 8*(-1) + 2*1 + 7*0 + 2*(-1) = -5.
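The padding fix described above can be checked numerically: for stride 1 and an odd filter size f, padding p = (f - 1) / 2 keeps the output the same size as the input (the helper name is my own):

```python
def same_padding(f):
    """Padding that preserves spatial size at stride 1: p = (f - 1) / 2 (odd filters)."""
    assert f % 2 == 1, "same padding assumes an odd filter size"
    return (f - 1) // 2

# 6 x 6 input, 3 x 3 filter: pad by 1 -> the input becomes 8 x 8, the output is 6 x 6 again
p = same_padding(3)
print(p)                   # 1
print(6 + 2 * p - 3 + 1)   # 6  (output size with padding: n + 2p - f + 1)
```

This is why "same" convolutions conventionally use odd filter sizes like 3 or 5.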
Generally, we take the set of hyperparameters which have been used in proven research and they end up doing well. Instead of using just a single filter, we can use multiple filters as well. We then define the cost function J(G) and use gradient descent to minimize J(G) to update G. So, instead of having a 4 X 4 output as in the above example, we would have a 4 X 4 X 2 output (if we have used 2 filters). Here, nc is the number of channels in the input and filter, while nc' is the number of filters. If you take a course in audit mode, you will be able to see most course materials for free. Each combination can have two images with their corresponding target being 1 if both images are of the same person and 0 if they are of different people. You will practice all these ideas in Python and in TensorFlow, which we will teach. This is the outline of a neural style transfer algorithm. Originally written as a way for me personally to help solidify and document the concepts. Now, say w[l+2] = 0 and the bias b[l+2] is also 0; then a[l+2] = g(a[l]). It is fairly easy to calculate a[l+2] knowing just the value of a[l]. In the final section of this course, we'll discuss a very intriguing application of computer vision, i.e., neural style transfer. Rating: 4.8. The first hidden layer looks for relatively simpler features, such as edges, or a particular shade of color. Since we are looking at three images at the same time, it's called a triplet loss. The input feature dimension then becomes 12,288. What will be the number of parameters in that layer? But while training a residual network, this isn't the case. The loss function can thus be defined as: L(A, P, N) = max(|| f(A) - f(P) ||² - || f(A) - f(N) ||² + α, 0).
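The triplet loss formula can be sketched directly in NumPy. The 128-dimensional encodings below are placeholders for the ConvNet outputs f(x), and alpha = 0.2 is an illustrative margin:

```python
import numpy as np

def triplet_loss(f_A, f_P, f_N, alpha=0.2):
    """L(A, P, N) = max(||f(A) - f(P)||^2 - ||f(A) - f(N)||^2 + alpha, 0)."""
    pos = np.sum((f_A - f_P) ** 2)   # squared distance to the positive image
    neg = np.sum((f_A - f_N) ** 2)   # squared distance to the negative image
    return max(pos - neg + alpha, 0.0)

# Toy 128-dimensional encodings (placeholders, not real ConvNet outputs)
f_A = np.zeros(128)
f_P = np.zeros(128)   # identical to the anchor: pos = 0
f_N = np.ones(128)    # far from the anchor: neg = 128
print(triplet_loss(f_A, f_P, f_N))  # max(0 - 128 + 0.2, 0) = 0.0
```

The margin alpha prevents the trivial solution where every encoding collapses to the same point.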
As seen in the above example, the height and width of the input shrink as we go deeper into the network (from 32 X 32 to 5 X 5) and the number of channels increases (from 3 to 10). We can use the following filters to detect different edges: the Sobel filter puts a little bit more weight on the central pixels. My research interests lie in the field of Machine Learning and Deep Learning. We will help you become good at Deep Learning. Structuring Machine Learning Projects and Course 5. S denotes that this matrix is for the style image. Reminder: the reason I would like to create this repository is purely for academic use (in case of my future use). Improving Deep Neural Networks. Deep Learning is one of the most highly sought after skills in tech. We can generalize it for all the layers of the network. Finally, we can combine the content and style cost functions to get the overall cost function, and there you go! Sigmoid: it is usually used in the output layer to generate results between 0 and 1 when doing binary classification. This matrix is called a style matrix. This is the architecture of a Siamese network. Suppose we pass an image to a pretrained ConvNet: we take the activations from the lth layer to measure the style. Their use is being extended to video analytics as well, but we'll keep the scope to image processing for now. All of these concepts and techniques bring up a very fundamental question: why convolutions? Here, the content cost function ensures that the generated image has the same content as the content image, whereas the style cost function is tasked with making sure that the generated image matches the fashion of the style image. We will use this learning to build a neural style transfer algorithm. So, the output will be 28 X 28 X 32: the basic idea of using a 1 X 1 convolution is to reduce the number of channels of the image.
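The style matrix described here is the Gram matrix of a layer's activations: the correlations between its channels. A minimal NumPy sketch (the shapes are illustrative placeholders):

```python
import numpy as np

def gram_matrix(a):
    """Style matrix of one layer's activations.
    a has shape (height, width, channels); the result has shape (channels, channels),
    with G[k, k'] = sum over all spatial positions of a_k * a_k'."""
    h, w, c = a.shape
    flat = a.reshape(h * w, c)   # unroll spatial positions into rows
    return flat.T @ flat

a_S = np.ones((4, 4, 3))         # placeholder lth-layer activations of the style image
G_S = gram_matrix(a_S)
print(G_S.shape)                 # (3, 3)
print(G_S[0, 0])                 # 16.0 (4 * 4 positions, all activations equal to 1)
```

Computing the same matrix for the generated image and penalizing the difference gives the per-layer style cost from the text.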
Like the human brain's neurons, a neural network has lots of interconnected nodes (a.k.a. neurons). a[l+2] = g(w[l+2] * a[l+1] + b[l+2] + a[l]). Let's look at how a convolutional neural network with convolution and pooling layers works. Minimizing this cost function will help in getting a better generated image (G). Week 1 - Introduction to deep learning; Week 2 - Neural Networks Basics; Week 3 - Shallow neural networks; Week 4 - Deep Neural Networks; Course 2. Even when we build a deeper residual network, the training error generally does not increase. Building a convolutional neural network for multi-class classification in images. Every time we apply a convolutional operation, the size of the image shrinks, and pixels present in the corners of the image are used only a few times during convolution as compared to the central pixels. If you had to pick one deep learning technique for computer vision from the plethora of options out there, which one would you go for? This means that the input will be an 8 X 8 matrix (instead of a 6 X 6 matrix). The course may not offer an audit option. Understand the key computations underlying deep learning, use them to build and train deep neural networks, and apply them to computer vision. To illustrate this, let's take a 6 X 6 grayscale image (i.e. only one channel). If both these activations are similar, we can say that the images have similar content. Consider one more example. Note: higher pixel values represent the brighter portions of the image and lower pixel values represent the darker portions. This is a microcosm of how a convolutional network works. Neural network layer notation. A Comprehensive Tutorial to learn Convolutional Neural Networks from Scratch (deeplearning.ai Course #4).
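The residual-block equation a[l+2] = g(w[l+2] * a[l+1] + b[l+2] + a[l]) can be sketched with fully connected layers standing in for the conv layers of a real ResNet block (a simplified sketch, not the actual architecture). With zero weights the block reduces to the identity, which is exactly why adding residual blocks does not hurt training:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def residual_block(a_l, W1, b1, W2, b2):
    """a[l+2] = g(z[l+2] + a[l]): the skip connection adds a[l] before the activation."""
    a_l1 = relu(W1 @ a_l + b1)     # a[l+1]
    z_l2 = W2 @ a_l1 + b2          # z[l+2]
    return relu(z_l2 + a_l)        # shortcut: add a[l], then apply g

# With zero weights and biases, the block outputs g(a[l]) = a[l] for non-negative a[l]
a_l = np.array([1.0, 2.0, 3.0])
W = np.zeros((3, 3)); b = np.zeros(3)
print(residual_block(a_l, W, b, W, b))  # [1. 2. 3.]
```

So a deeper residual network can always fall back to the identity mapping, matching the observation that its training error generally does not increase with depth.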
We will use a Siamese network to learn the function which we defined earlier: suppose we have two images, x(1) and x(2), and we pass both of them to the same ConvNet. Let's say the first filter will detect vertical edges and the second filter will detect horizontal edges from the image. Module 3 will cover the concept of object detection. This article will look at both programming assignments 3 and 4 on neural networks from Andrew Ng's Machine Learning course. A very structured approach to developing a neural network, which I believe I can use as a foundation for any project regardless of its complexity. For the content and generated images, these activations are a[l](C) and a[l](G) respectively. This is also called one-to-one mapping, where we just want to know if the image is of the same person. Andrew Ng explains neural networks using an easy to understand real estate example: if the price of a house was directly proportional to the square footage of the house, a simple neural network could be programmed to take the square footage of the … Deeplearning.ai has opened applications for the online course on Neural Networks and Deep Learning by Andrew Ng. It is a completely online, intermediate-track course that takes approximately 20 hours to complete, and Andrew Ng is the instructor.
In the triplet loss notation, 'A' stands for the anchor image, 'P' for the positive image and 'N' for the negative image. In many cases we also face issues like lack of training data; practical techniques used in deep learning, like transfer learning and data augmentation, can be helpful here, and we can improve the performance of a neural network using techniques like hyperparameter tuning, regularization and optimization. Training very deep networks can lead to problems like vanishing and exploding gradients. Running convolutional neural networks on very large images (say, of size 720 X 720 X 3) greatly increases the number of parameters, which is why parameter sharing and sparsity of connections matter so much: for each layer, each output value depends on only a small number of inputs, computed as the element-wise product of the filter and an image patch. For the style cost, we take the activations across channels of a layer; if two channels tend to activate together, the corresponding entry Gkk' will be large. A pretrained network can be adapted to a new face recognition task by removing the final softmax layer, which is also what makes one-shot learning possible: the network learns an encoding rather than a fixed classifier. After completing this course you will be able to explain the major trends driving the rise of deep learning, illustrated through case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. Later in the series we look at more advanced architectures, starting with ResNet, along with concepts like LSTM, Adam, Dropout, BatchNorm, and Xavier/He initialization. Keeping a consistent layer notation minimizes confusion when dealing with the many equations and algorithms. The course, part of the broader Deep Learning Specialization, can also be audited on Coursera. The instructor has been very clear and precise throughout, and while there is definitely a steep learning curve for the assignments, many learners report a tangible career benefit after completing these courses.