Abstract:

According to the World Health Organization, more than 2 billion people are affected by contaminated water. Even in the United States, where we tend to assume the water is safe, the Flint, Michigan crisis showed that a first world country can still face water safety problems. Most water testing today is chemical based, the most common method being single-use chemical test strips, which makes continuous contamination monitoring difficult and labor intensive; this is part of why events like Flint happen. Smart Water AI is an IoT device that detects and classifies dangerous bacteria and harmful particles, and it can run continuously in real time. Cities can install these devices across different water sources and monitor water quality and contamination around the clock.

What is AI and how do we use it:

Deep learning has been a major trend in machine learning, and its recent success is what makes a project like this possible. We focus specifically on computer vision and image classification. To do this, we build a contamination image classifier using a deep learning algorithm, the Convolutional Neural Network (CNN), through the Caffe framework. In this work we focus on supervised learning, which requires training on a server as well as deployment on the edge. Our goal is to build a machine learning model that can detect contamination in images in real time, so that you can build your own AI-based contamination classification device. Our application has two parts. The first part is training, where we use sets of labeled water contamination images to train the machine learning model. The second part is deployment on the edge, where we run the same trained model on an edge device, in this case the Movidius Neural Compute Stick.
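
As a rough, hedged illustration of the training half, the sketch below shows how a Caffe model can be trained through the pycaffe interface. The file names 'solver.prototxt' and 'pretrained.caffemodel' are placeholders, not files from this project; the actual network and solver definitions live in the project's .prototxt files.

import caffe

# Minimal training sketch with pycaffe. File names below are placeholders.
caffe.set_mode_gpu()                        # use caffe.set_mode_cpu() if no GPU is available
solver = caffe.SGDSolver('solver.prototxt') # solver describes the net, learning rate, iterations

# Optional transfer learning: start from weights trained on a larger dataset.
solver.net.copy_from('pretrained.caffemodel')

solver.solve()                              # runs the training loop defined by the solver

The resulting .caffemodel can then be compiled for the Neural Compute Stick with the NCSDK's mvNCCompile tool, which is how the same trained network ends up running on the edge.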

How this works:

This is probably the most asked question in AI, and it's fairly simple once you learn how it's done. To understand it, we first have to look at how machine learning image classification works. Traditional machine learning requires feature extraction and model training: we first use domain knowledge to extract features that the ML model can consume (examples include SIFT and HoG), then train the model on a dataset of those image features and their labels, as in the sketch below. The major difference between traditional ML and deep learning is in the feature engineering: traditional ML uses manually engineered features, whereas deep learning learns them automatically. Feature engineering is relatively difficult since it requires domain expertise and is very time consuming; deep learning needs no manual feature engineering and can be more accurate.
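
To make the contrast concrete, here is a minimal sketch of the traditional pipeline: hand-crafted HoG features fed into a classic classifier. scikit-image and scikit-learn are used purely for illustration, and load_water_images is a hypothetical data-loading helper, not part of this project.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from sklearn.svm import LinearSVC

def extract_hog(image):
    # Hand-engineered feature: histogram of oriented gradients.
    return hog(rgb2gray(image), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

images, labels = load_water_images()              # hypothetical dataset loader
features = np.array([extract_hog(img) for img in images])

clf = LinearSVC()
clf.fit(features, labels)                         # train on the hand-crafted features

A deep learning approach drops extract_hog entirely: the CNN learns its own filters directly from the raw pixels during training.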

Convolutional Neural Networks (CNNs):

A convolutional neural network is a class of deep, feed-forward artificial neural networks most commonly applied to analyzing visual imagery, because it is designed to emulate the behavior of the animal visual cortex. It consists of convolutional layers and pooling layers, so the network can encode image properties; a toy sketch of the convolution operation follows.
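
Here is a toy NumPy sketch of what a single convolutional filter computes. The 3x3 kernel and the random 8x8 "image" are made-up values purely for illustration.

import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image (stride 1, no padding) and take the
    # dot product at every position, producing a 2-D activation map.
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)                      # stand-in for a grayscale input
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]])            # a simple hand-written edge filter
activation_map = conv2d(image, vertical_edge)     # shape (6, 6)
print(activation_map.shape)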

A convolutional layer's parameters consist of a set of learnable filters (or kernels) with a small receptive field. Each filter is convolved across the width and height of the input, computing the dot product between the filter's entries and the input at each position and producing a 2-dimensional activation map for that filter, as in the toy sketch above. In this way the network learns filters that activate when they detect specific spatial features in the input image. Here are the steps of this project:

Step 1: Water flowing
Step 2: Coding
Step 3: Setting up Azure Machine Learning for training
Step 4: Collecting data for AI
Step 5: Training AI Convolutional Neural Network (CNN)
Step 6: Install the Movidius NCS SDK and deploy the CNN Caffe model on the edge (see the sketch after this list)
Step 7: Install Helium SDK
Step 8: Connect Azure IoT Hub to Helium Dashboard
Step 9: Setup Azure SQL Database
Step 10: Create Azure Function App
Step 11: Display the Data
Step 12: AI to the rescue!!!
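
As a rough sketch of the edge side of these steps, the snippet below runs a compiled Caffe graph on the Movidius Neural Compute Stick (Step 6) and pushes each classification result to Azure IoT Hub (Steps 8 and 12). It uses the NCSDK v1 Python API and the azure-iot-device package; for simplicity it sends directly to IoT Hub rather than through the Helium network of Steps 7 and 8, and the graph file name, 224x224 input size, camera index, and connection string are placeholder assumptions, not values from the project.

import json
import numpy as np
import cv2
from mvnc import mvncapi as mvnc
from azure.iot.device import IoTHubDeviceClient, Message

# --- Movidius NCS setup (NCSDK v1 API) ---
devices = mvnc.EnumerateDevices()
if not devices:
    raise RuntimeError('No Movidius NCS device found')
device = mvnc.Device(devices[0])
device.OpenDevice()
with open('contamination.graph', 'rb') as f:      # graph compiled with mvNCCompile
    graph = device.AllocateGraph(f.read())

# --- Azure IoT Hub client (device connection string is an assumption) ---
client = IoTHubDeviceClient.create_from_connection_string('<device-connection-string>')

def classify(frame):
    # Preprocessing (resize, float16) is simplified; real code should match
    # the normalization used during training.
    img = cv2.resize(frame, (224, 224)).astype(np.float16)
    graph.LoadTensor(img, 'sample')
    output, _ = graph.GetResult()
    return int(np.argmax(output)), float(np.max(output))

cap = cv2.VideoCapture(0)                         # camera watching the water sample
while True:
    ok, frame = cap.read()
    if not ok:
        break
    label, score = classify(frame)
    client.send_message(Message(json.dumps({'label': label, 'score': score})))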

Potential impact:

People will find it much easier to test their water, and the approach can be extended worldwide.

Significance of our idea:

1. Uses AI, the next trend
2. Detects impurities in all types of water (at any ppm level)
3. The AI can also be controlled from a mobile app.
4. The AI is easy to control.
5. The device is portable.
6. The device detects impurities with 98% accuracy and automatically purifies the water using filtering techniques.

How can it be produced:

This work can be extended to a wide range of settings with proper datasets and a well-engineered model or prototype.


Inspiration:

Nowadays, wherever we go, no matter how clean we are, the water around us is not clean. Moreover, a visit to a slum last week made me feel for the residents' conditions, as they did not have pure water to drink. This is what led us to develop this idea.
