Deep Learning is a new area of Machine Learning research, which has been introduced with the objective of moving Machine Learning closer to one of its original goals: Artificial Intelligence.
In addition to scalability, another often cited benefit of deep learning models is their ability to perform automatic feature extraction from raw data, also called feature learning.
Why is ‘Deep Learning’ called deep? It is because of the structure of ANNs. Forty years ago, neural networks were only two layers deep, since it was not computationally feasible to build larger networks. Now it is common to have neural networks with 10+ layers, and even 100+ layer ANNs are being experimented with.
You can essentially stack layers of neurons on top of each other. The lowest layer takes raw data like images, text, sound, etc., and each neuron stores some information about the data it encounters. Each neuron in a layer sends information up to the next layer of neurons, which learns a more abstract version of the data below it. So the higher you go, the more abstract the features you learn. The picture below shows a network with 5 layers, of which 3 are hidden layers.
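To make the idea of stacking concrete, here is a minimal Keras sketch of such a 5-layer network (the 784-dimensional input and the 10 output classes are just assumptions for illustration):

from keras.models import Sequential
from keras.layers import Dense

# A rough sketch: input layer, 3 hidden layers, and an output layer.
model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(784,)))  # hidden layer 1: low-level features
model.add(Dense(128, activation='relu'))                      # hidden layer 2: more abstract features
model.add(Dense(64, activation='relu'))                       # hidden layer 3: even more abstract features
model.add(Dense(10, activation='softmax'))                    # output layer: class probabilities
model.summary()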
In deep learning, ANNs automatically extract features instead of relying on manual feature engineering. Take an image as input, for example. Instead of hand-computing features like the distribution of colors, image histograms, distinct color counts, etc., we just feed the raw images into the ANN. ANNs have already proved their worth in handling images, and now they are being applied to all kinds of other data like raw text, numbers, etc. This lets the data scientist concentrate more on building deep learning algorithms.
DATA, duh?
Soon, feature engineering may become obsolete, but deep learning algorithms will require massive amounts of data to feed into our models. Fortunately, we now have big data sources that were not available two decades back: Facebook, Twitter, Wikipedia, Project Gutenberg, etc.
An API is the interface through which you access someone else’s code, or through which someone else’s code accesses yours: in effect, the public methods and properties.
For example, suppose you are buying an item online with your credit card. You provide your credit card details and press the continue button, and the site tells you whether your information is correct or not. To produce that result, a lot happens in the background.
The application sends your credit card details to a remote application, which validates your information and sends the result back to your application. An API is what makes this scenario possible.
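As a rough illustration of that call (the endpoint URL and the fields below are purely hypothetical), the shop’s backend might talk to such a payment API like this:

import requests

# Hypothetical example: forward the card details to a remote payment API
# and read back the validation result.
payload = {"card_number": "4111111111111111", "expiry": "12/30", "cvv": "123"}
response = requests.post("https://payments.example.com/api/validate",  # hypothetical endpoint
                         json=payload)
print(response.json())   # e.g. {"valid": true}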
I hope this helps beginners who don’t really understand what an API is.
Convolutional Neural Networks (ConvNets or CNNs) are a category of Neural Networks that have proven very effective in areas such as image recognition and classification. ConvNets have been successful in identifying faces, objects and traffic signs apart from powering vision in robots and self driving cars.
There are four main operations in a ConvNet: Convolution, Non-Linearity (ReLU), Pooling (sub-sampling), and Classification (fully connected layer).
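A minimal Keras sketch of these four operations might look like the following (the 32x32 RGB input, channels-last ordering, and 10 classes are assumptions for illustration):

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu',
                        input_shape=(32, 32, 3)))   # convolution + ReLU non-linearity
model.add(MaxPooling2D(pool_size=(2, 2)))           # pooling (sub-sampling)
model.add(Flatten())
model.add(Dense(10, activation='softmax'))          # fully connected classification layer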
Requirements:
Python
Flask (a web framework for Python)
Keras with TensorFlow as backend
numpy
pillow
Basic Information of Flask
From the Flask documentation (flask.pocoo.org): “Next we create an instance of this class. The first argument is the name of the application’s module or package. If you…”
Basic Example
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello World!"

@app.route('/<name>')
def hello_name(name):
    return "Hello {}!".format(name)

if __name__ == '__main__':
    app.run()
We use the route() decorator to tell Flask which URL should trigger our function. Next, we are going to build an API around a pre-trained VGG model with Flask and Keras.
VGG usually refers to a deep convolutional network for object recognition developed and trained by Oxford’s renowned Visual Geometry Group (VGG), which achieved very good performance on the ImageNet dataset.
If you look at the model definition, it looks something like this:
model = Sequential()
model.add(ZeroPadding2D((1,1), input_shape=(3,224,224)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
# ... (the remaining convolutional blocks are omitted here) ...
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1000, activation='softmax'))
But don’t worry, you don’t need to write that much code to use a pre-trained VGG model with Keras. And if you want to see the full code, check this out
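For example, a minimal sketch of loading the pre-trained VGG19 from keras.applications and classifying one image could look like this (‘example.jpg’ is just a placeholder path):

from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.preprocessing.image import load_img, img_to_array
import numpy as np

# Load VGG19 with ImageNet weights: one line instead of the layer stack above.
model = VGG19(weights="imagenet")

img = load_img("example.jpg", target_size=(224, 224))   # placeholder path
x = np.expand_dims(img_to_array(img), axis=0)
x = imagenet_utils.preprocess_input(x)
preds = model.predict(x)
print(imagenet_utils.decode_predictions(preds)[0][0])    # (class_id, label, probability)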
Now let’s get to the final code, shall we?
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.preprocessing.image import img_to_array
from keras import backend as K
import numpy as np
import PIL
from PIL import Image
import requests
from io import BytesIO
import tensorflow as tf
from flask import Flask, request, abort, make_response, jsonify

K.set_image_dim_ordering('tf')
graph = tf.get_default_graph()

app = Flask(__name__)
BAD_REQUEST = 400

inputShape = (224, 224)
preprocess = imagenet_utils.preprocess_input
# if we are using the InceptionV3 or Xception networks, then we
# need to set the input shape to (299x299) [rather than (224x224)]
# and use a different image preprocessing function


def load_img(path, grayscale=False, target_size=None):
    # Download an image from a URL, convert it to 'L' (grayscale) or 'RGB',
    # and optionally resize it to target_size.
    response = requests.get(path)
    img = Image.open(BytesIO(response.content)).resize((224, 224))
    if grayscale:
        if img.mode != 'L':
            img = img.convert('L')
    else:
        if img.mode != 'RGB':
            img = img.convert('RGB')
    if target_size:
        wh_tuple = (target_size[1], target_size[0])
        if img.size != wh_tuple:
            img = img.resize(wh_tuple)
    return img


def read_image_from_url(url):
    # Fetch the image behind a URL, resize it to the VGG input size and force RGB.
    response = requests.get(url, stream=True)
    img = Image.open(BytesIO(response.content))
    img = img.resize((224, 224), PIL.Image.ANTIALIAS).convert('RGB')
    return img


def read_image_from_ioreader(image_request):
    # Read an uploaded file object (multipart form field) into a PIL image.
    img = Image.open(BytesIO(image_request.read())).resize((224, 224)).convert('RGB')
    return img


def predict(image):
    # Load VGG19 with ImageNet weights and classify a single PIL image.
    model = VGG19(weights="imagenet")
    image1 = img_to_array(image)
    image1 = np.expand_dims(image1, axis=0)
    # pre-process the image (mean subtraction, scaling, etc.)
    image1 = preprocess(image1)
    # classify the image
    preds = model.predict(image1)
    P = imagenet_utils.decode_predictions(preds)
    for (i, (imagenetID, label, prob)) in enumerate(P[0]):
        print("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))
    (imagenetID, label, prob) = P[0][0]
    return label


@app.route('/api/v1/classify_image', methods=['POST'])
def classify_image():
    if 'image' in request.files:
        # multipart/form-data upload with an 'image' field
        print("Image request")
        img = read_image_from_ioreader(request.files['image'])
    elif request.json and 'url' in request.json:
        # JSON body with a 'url' field pointing at an image
        print("JSON request: {}".format(request.json))
        img = read_image_from_url(request.json['url'])
    else:
        abort(BAD_REQUEST)
    resp = predict(img)
    return make_response(jsonify({'message': resp}), 200)


if __name__ == '__main__':
    app.run(debug=True, port=5432)
The jsonify() function in Flask returns a flask.Response() object that already has the appropriate Content-Type header (‘application/json’) for JSON responses, whereas json.dumps() just returns an encoded string, which would require manually adding the MIME type header. So instead of jsonify(), we can also use json.dumps().
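Here is a small sketch contrasting the two approaches (the route names are made up for illustration):

from flask import Flask, jsonify, make_response
import json

app = Flask(__name__)

@app.route('/with-jsonify')
def with_jsonify():
    # The response already carries Content-Type: application/json
    return jsonify({'message': 'ok'})

@app.route('/with-dumps')
def with_dumps():
    # json.dumps() only gives a string; the header must be set by hand
    resp = make_response(json.dumps({'message': 'ok'}), 200)
    resp.headers['Content-Type'] = 'application/json'
    return resp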
Python Imaging Library (abbreviated as PIL) (in newer versions known as Pillow) is a free library for the Python programming language that adds support for opening, manipulating, and saving many different image file formats. An RGB image, sometimes referred to as a truecolor image, is stored as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. RGB images do not use a palette.
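A quick sketch of what that looks like with Pillow (‘example.png’ is a placeholder path):

from PIL import Image
import numpy as np

# Open an image, make sure it is RGB, and inspect the m-by-n-by-3 array
# of red, green, and blue components.
img = Image.open("example.png").convert("RGB")   # placeholder path
arr = np.array(img)
print(arr.shape)   # (height, width, 3)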
Now, how do we send a POST request to the API from Python?
flask_client.py
import requests
import json

headers = {'Content-type': 'application/json'}
imageurl = 'http://pngimg.com/uploads/orange/orange_PNG780.png'
data = {'url': imageurl}
res = requests.post('http://localhost:5432/api/v1/classify_image',
                    data=json.dumps(data), headers=headers)
print(res.text)
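If you would rather upload a local file instead of sending a URL, the API’s ‘image’ branch accepts a multipart upload. A sketch of such a client (the filename is a placeholder):

import requests

# Upload a local image file to the same endpoint via multipart/form-data.
with open('orange.png', 'rb') as f:   # placeholder filename
    res = requests.post('http://localhost:5432/api/v1/classify_image',
                        files={'image': f})
print(res.text)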
That’s it!! yeah I swear!!
Example of flask_client.py
If you have any trouble, please let me know!!
If you found this helpful, click the 👏 so more people will see it here on Medium.