***During the COVID-19 pandemic, it is important to remember not to panic, but to take precautions and stay safe. Please be sure to wash your hands thoroughly and properly, and practice social distancing at this time.*** Stay safe, we can fight this ❤️

With the global pandemic increasing in scale and spreading quickly, it is very important for us to stay cautious, safe, and healthy. There needs to be an efficient way to speed up diagnosis for doctors, nurses, and medical professionals so that they can focus on treatment immediately, and one promising way is to use neural networks and deep learning to find a solution to this problem for us.

**About Neural Nets**

A Neural Network, or a Neural "Net", is a deep learning model which loosely mimics a biological neural network. It is a system which learns a specific pattern by taking in examples, without being explicitly programmed to achieve the task. To build or implement a neural network in code (for example, in Python), there are essentially four steps to follow: 1. Setting the architecture of the model, 2. Compiling the model, 3. Fitting the model, 4. Predicting with the model. Before we delve into the four-step process, let's take a look at what a neural network looks like: a neural network is composed of neurons (or nodes) and layers.
**Forward and Back Propagation**

The neural network makes predictions using a process called forward propagation. It takes in the feature values at the input layer, performs the computations layer by layer, and produces some output value at the output layer, which is compared against the target. Now, what if the network makes a prediction which is not accurate? What if the prediction the network makes is far from the actual value? This is where we need to update the weights so that the predictions are closer to the actual target values. In order to achieve this, the network goes backwards from the output layer through the hidden layer(s) (all the way back to the input layer) and updates the weights using an optimizer, so that the next predictions are more accurate. An optimizer is essentially an algorithm which helps us minimize the error between the predicted and the actual target values. Every time we train our network, our goal is to minimize the loss so that the neural network can accurately predict the next set of data. When we plot the loss against all possible weights, we want to find the weights where the loss is at its minimum value. To find the minimum loss, we look for the minima of the function, or where the slope is 0. The optimizer updates the weights using a learning rate, which controls how much each weight changes on every update.

**Four Step Process**

Neural networks can be used to solve many classification and regression problems, such as image classification, character recognition, and natural language processing. But how do you implement a neural network to solve a problem? These are the four steps:
1. Set the architecture of the model
2. Compile the model
3. Fit the model
4. Predict with the model
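Before walking through each step, here's a tiny, purely illustrative sketch of the weight update an optimizer performs during back propagation. The toy loss, starting weight, and learning rate below are all made up just to show the idea:

```python
# Toy loss: (weight - 0.5)^2, whose minimum (where the slope is 0) is at weight = 0.5
weight = 2.0          # made-up starting weight
learning_rate = 0.1   # how big a step each update takes

for step in range(5):
    gradient = 2 * (weight - 0.5)               # slope of the toy loss at the current weight
    weight = weight - learning_rate * gradient  # the optimizer's update rule
    print(step, round(weight, 4))               # the weight moves toward 0.5, the minimum
```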
To build a neural network, we first set the architecture, or the overall structure, of the network. Before that, we import the libraries we need to build this network. The main library we require is the keras library, which is a neural network library in Python.
Importing important libraries...
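A minimal sketch of the imports this walkthrough assumes (depending on your setup, the Keras classes may live under tensorflow.keras instead of keras):

```python
# Data handling
import pandas as pd

# Keras model-building API (may be tensorflow.keras on newer installs)
from keras.models import Sequential
from keras.layers import Dense
```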
In this step, you read in the input training dataset. To do this you use the read_csv function from the pandas library (another Python library, used for working with datasets). We then use the Sequential model API in the keras library to build and instantiate the model. A sequential model is pretty self-explanatory: every layer only has connections to the layer coming after it. To add a layer to the model, you can use .add(Dense()) to specify the layer. There is also something called an activation function, which defines the output of a node given its weights and inputs. The relu activation function states that if the value is positive, the output is just the value; otherwise the output is 0. When we define our first layer, we also need to specify the input shape, which represents how many columns, or feature inputs, the dataset has. The next step is to compile the model, where we specify the optimizer. The optimizer modifies the weights throughout the training process, so that the predictions move closer to the actual targets. To compile the model, you need to specify your optimizer (there are many kinds, for example the Adam optimizer and the SGD - Stochastic Gradient Descent - optimizer) and the loss function (for example MSE - mean squared error). The loss function is a way to calculate the error between the predicted and actual values. For MSE, we square all of the errors between the predicted and actual values, and take the average of them.
Compiling model....
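Here's a hedged sketch of steps 1 and 2: reading a dataset, setting the architecture, and compiling the model. The file name 'train.csv', the column name 'target', and the layer sizes are placeholders, not values from this post:

```python
# Read in the training data ('train.csv' and 'target' are placeholders)
df = pd.read_csv('train.csv')
X = df.drop('target', axis=1).values   # feature columns
y = df['target'].values                # target column

# 1. Set the architecture: a simple Sequential model with Dense layers
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(X.shape[1],)))  # input shape = number of feature columns
model.add(Dense(32, activation='relu'))
model.add(Dense(1))  # a single output node for the predicted value

# 2. Compile the model: choose an optimizer and a loss function
model.compile(optimizer='adam', loss='mean_squared_error')
```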
The third step is to fit the model to your dataset!! This is a very fun part: you basically fit the model on the X and y portions of the dataset (which are the input features and the targets, respectively). This fits your dataset to the model you have created.
Fitting the model
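A one-line sketch of step 3; the number of epochs and the batch size are illustrative choices, not values from this post:

```python
# 3. Fit the model: learn the weights from the training features (X) and targets (y)
model.fit(X, y, epochs=10, batch_size=32)
```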
The last step is to predict values for your testing set. This is to test the model's accuracy on data it hasn't seen before. To do this, you would:
Predicting for testing set
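A sketch of step 4; 'test.csv' is a placeholder file assumed to contain the same feature columns as the training set:

```python
# 4. Predict on data the model hasn't seen before ('test.csv' is a placeholder)
X_test = pd.read_csv('test.csv').values
predictions = model.predict(X_test)
print(predictions[:5])   # peek at the first few predicted values
```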
**Deep Learning and COVID-19**

Deep learning algorithms and neural networks can help us detect infections from CT scans. This can improve the process of diagnosis, and we can also use machine learning to learn from all sorts of data: whether it be the geographic impact of COVID-19 or other external influences of the virus. Stay safe and healthy, and I wish y'all loads of happiness!!!! We got this! ❤️
Whenever I think of a linear classifier or a linear machine learning model, the first thing that comes to my mind is the equation y = mx + b. This equation does wonders in sooooo many different fields and applications. It is essentially the foundation of how some ML models make predictions for testing data points.

**What's 'classification'?**

Classification is when we use some form of a data point's characteristics to determine which group the piece of data falls into. For example, detecting whether a movie review is "good" or "bad", or grouping email as "spam" or "not spam". These situations all fall into "statistical classification", where the underlying problem is to identify which group a new piece of data belongs to, by using training data to learn the pattern.

**Linear Classifiers**

Linear classifiers classify data based on a linear function of the inputs. A binary linear classifier uses a linear function to identify which of two groups a new observation belongs to. Note that binary classifiers only deal with two targets, or 'groups'. So how would we represent the input data (for example, an image in an image-classification task)? When dealing with classification, our data consists of input dimensions, or features. It also consists of target variables, which represent the 'end result'. In binary classification, the 'target' variable (or the end result) can be one of two values (0 or 1, true or false, etc. - a binary-valued target). An example can be to implement a medical diagnosis system, where you predict whether a certain patient might carry an infection. The input data consists of the patient's history and symptoms, which are the features, and the target is whether they are carrying the infection or not. A classifier acts as a decision boundary, where one side corresponds to one class, and the other side corresponds to the other class.

**Inputs and Weights and Biases**

A binary target value has two different possibilities, or "classes", which represent the end result (for example: whether the email was spam, or not spam). In a machine learning model, the training dataset consists of the feature variables and the corresponding target values for those variables. This is the data we already know about, or have with us. We are going to use this data to predict the target variable for other 'not seen' cases. Hope this makes sense, because it does get kind of tricky! So how are the predictions for the unseen data computed from this training data? The model basically computes a linear function (like y = mx + b) based on something called weights and biases. The model then checks whether the output of the function is greater than or less than a constant threshold, let's call it r. So this is essentially the raw model output:

raw model output = coefficients • features + intercept

So the raw model output is basically the dot product of the 'coefficients' (which are the weights) and the feature variables, plus an intercept (which is the bias). The weights essentially represent the importance of a particular feature variable in the model. For instance, a particular symptom of an infection might be a very critical factor in whether the patient is infected or not, so it has a greater weight associated with it.
So in binary linear classification, if the model output is less than the threshold, the data point belongs to one class; if it is greater than the threshold, the data point belongs to the other class. In this example, the threshold value is 0. Therefore, if the model output is a positive value, it is predicted as one class, and if the output is negative, the model predicts it as the other class. The linear classifier is the decision boundary (which is the line). Along the line, the raw output is 0. If the intercept changes, the boundary line shifts its position; if the weights or coefficients of the linear function change, the line's slope and orientation change as well. There are lots and lots of applications of linear classification. And there are also models which perform multi-class linear classification, instead of working with just two classes.
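To make the raw model output and the threshold concrete, here's a small sketch with made-up weights, a made-up bias, and a single made-up data point (the numbers are illustrative only):

```python
import numpy as np

# raw model output = coefficients . features + intercept
coefficients = np.array([0.8, -1.5, 0.3])   # made-up weights, one per feature
intercept = 0.2                              # made-up bias
features = np.array([1.0, 0.4, 2.0])         # one made-up data point

raw_output = np.dot(coefficients, features) + intercept

# Threshold at 0: a positive raw output predicts one class, a negative one predicts the other
predicted_class = 1 if raw_output > 0 else 0
print(raw_output, predicted_class)
```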