Posts

Visual Question Answering with Keras – Part 1

This is Part I of II of the Article Series Visual Question Answering with Keras

Making computers intelligent enough to answer questions about images

If we look at the recent history of Artificial Intelligence (AI), Deep Learning has gained enormous popularity in recent years and has achieved human-level performance on tasks such as Speech Recognition, Image Classification, Object Detection, Machine Translation and so on. Yet these are tasks that not only we as adults but even a five-year-old child can normally perform without much difficulty. The development of systems with these capabilities has therefore always been considered an ambitious goal for researchers as well as developers.

In this series of blog posts, I will cover an introduction to Visual Question Answering (VQA), the datasets available for it, a Neural Network approach to VQA and its implementation in Keras, and real-life applications of this challenging problem.

Table of Contents:

1 Introduction

2 What exactly is Visual Question Answering?

3 Prerequisites

4 Datasets available for VQA

4.1 DAQUAR Dataset

4.2 CLEVR Dataset

4.3 FigureQA Dataset

4.4 VQA Dataset

5 Real-life applications of VQA

6 Conclusion

 

  1. Introduction:

Let’s say you are given the picture below along with one question. Can you answer it?

I am confident you will all say it is a kitchen without much difficulty, which is also the right answer. Even a five-year-old child who has just started to learn about things would probably answer this question correctly.

Alright, but can you write a computer program for such a task, one that takes an image and a question about the image as input and gives us the answer as output?

Before the development of Deep Neural Networks, this problem was considered one of the most difficult, almost inconceivable challenges by the AI research community. However, due to recent advances in Deep Learning, systems are now capable of answering these questions with promising results, provided we have the required dataset.

Now I hope you have at least some intuition for the problem that we are going to discuss in this series of blog posts. Let’s try to formalize the problem in the section below.

  2. What exactly is Visual Question Answering?:

We can define it as follows: “Visual Question Answering (VQA) is a system that takes an image and a natural language question about the image as input and generates a natural language answer as output.”

VQA is a research area that requires an understanding of vision (Computer Vision) as well as text (NLP). The main beauty of VQA is that the reasoning is performed in the context of the image. So if we have an image with a corresponding question, the system must be able to understand the image well in order to generate an appropriate answer. For example, if the question asks for the number of persons, the system must be able to detect the faces of the persons; to answer the color of a horse, the system needs to detect the objects in the image. Many of these underlying problems, such as face detection, object detection and binary classification (yes or no), have already been solved in the field of Computer Vision with good results.

To summarize, a good VQA system must be able to address the typical problems of CV as well as NLP.
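To give a first feel for what such a system can look like in code, below is a minimal Keras sketch of the common baseline approach: encode the question with an LSTM, combine it with pre-extracted CNN image features, and classify over a fixed answer vocabulary. This is only an illustrative sketch, not the model we will build in the next post; the layer sizes, the 4096-dimensional image features and the 1,000-answer vocabulary are assumptions made for the example.

from keras.layers import Input, Dense, Embedding, LSTM, concatenate
from keras.models import Model

num_words = 10000     # assumed question vocabulary size
seq_len = 26          # assumed maximum question length (in words)
img_feat_dim = 4096   # assumed size of pre-extracted CNN image features
num_answers = 1000    # assumed answer vocabulary (e.g. the 1,000 most frequent answers)

# Question channel: embed the words and summarize the sequence with an LSTM
question_in = Input(shape=(seq_len,))
x_q = Embedding(num_words, 300)(question_in)
x_q = LSTM(512)(x_q)

# Image channel: project the pre-extracted CNN features to the same size
image_in = Input(shape=(img_feat_dim,))
x_i = Dense(512, activation="relu")(image_in)

# Fuse both modalities and classify over the answer vocabulary
x = concatenate([x_q, x_i])
x = Dense(1024, activation="relu")(x)
answer_out = Dense(num_answers, activation="softmax")(x)

model = Model([question_in, image_in], answer_out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()

Treating the answer as a classification over a fixed vocabulary is the standard simplification used by most VQA baselines; generating free-form answers is a much harder problem.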

To get a better feel for VQA you can try the online VQA demo by CloudCV. Just go to this link, upload a picture of your choice and ask a question related to the picture; the system will generate an answer to it.

 

  3. Prerequisites:

In the next post, I will walk you through the code for this problem using Keras. So I assume that you are familiar with:

  1. Fundamental concepts of Machine Learning
  2. Multi-Layered Perceptron
  3. Convolutional Neural Network
  4. Recurrent Neural Network (especially LSTM)
  5. Gradient Descent and Backpropagation
  6. Transfer Learning
  7. Hyperparameter Optimization
  8. Python and Keras syntax

  4. Datasets available for VQA:

As you know, for problems related to CV or NLP the availability of a dataset is the key to solving the problem. For a complex problem like VQA, the dataset must cover as many of the question-answer possibilities that arise in real-world scenarios as possible. In this section, I will cover some of the datasets available for VQA.

4.1 DAQUAR Dataset:

The DAQUAR dataset was the first dataset for VQA and contains only indoor scenes. The human baseline accuracy on it is 50.2%. It contains images from the NYU_Depth dataset.

Example of the DAQUAR dataset

The main disadvantage of DAQUAR is that the dataset is too small to capture all possible indoor scenes.

4.2 CLEVR Dataset:

The CLEVR dataset from Stanford contains questions about objects of different types, colors, shapes, sizes, and materials.

It has

  • A training set of 70,000 images and 699,989 questions
  • A validation set of 15,000 images and 149,991 questions
  • A test set of 15,000 images and 14,988 questions

Image Source: https://cs.stanford.edu/people/jcjohns/clevr/

 

4.3 FigureQA Dataset:

The FigureQA dataset contains questions about bar graphs, line plots, and pie charts. It has 1,327,368 questions for 100,000 images in the training set.

4.4 VQA Dataset:

Compared to all the datasets that we have seen so far, the VQA dataset is relatively large. The VQA dataset contains open-ended as well as multiple-choice questions. The VQA v2 dataset contains:

  • 82,783 training images from COCO (common objects in context) dataset
  • 40,504 validation images and 81,434 test images
  • 443,757 question-answer pairs for training images
  • 214,354 question-answer pairs for validation images.

As you might expect, this dataset is huge; the training images alone take up 12.6 GB. I have used this dataset in the next post, but only a very small subset of it.

This dataset also contains abstract cartoon images. Each image has 3 questions and each question has 10 multiple choice answers.
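If you want to explore the dataset yourself, the questions and annotations are distributed as JSON files on the official VQA website. The snippet below is a small sketch of how one might inspect the training questions; it assumes you have already downloaded and unzipped the v2 open-ended training questions, and the file name shown is the one used in the official release at the time of writing.

import json

# Assumed path to the unzipped VQA v2 training questions file
questions_path = "v2_OpenEnded_mscoco_train2014_questions.json"

with open(questions_path) as f:
    data = json.load(f)

questions = data["questions"]
print("Number of training questions:", len(questions))

# Each entry pairs a question with the COCO image it refers to
sample = questions[0]
print(sample["question"])      # the question text
print(sample["image_id"])      # id of the COCO image the question is about
print(sample["question_id"])   # unique id of the question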

  5. Real-life applications of VQA:

There are many applications of VQA. One of the best-known applications is helping blind and visually impaired people. In 2016, Microsoft released the “Seeing AI” app, which describes the surrounding environment to visually impaired people. You can watch this video for the prototype of the Seeing AI app.

Another application could be on social media or e-commerce sites. VQA can also be used for educational purposes.

  6. Conclusion:

I hope this explanation gives you a good idea of Visual Question Answering. In the next blog post, I will walk you through the code in Keras.

If you like my explanations, do provide some feedback, comments, etc. and stay tuned for the next post.

How To Remotely Send R and Python Execution to SQL Server from Jupyter Notebooks

Introduction

Did you know that you can execute R and Python code remotely in SQL Server from Jupyter Notebooks or any IDE? Machine Learning Services in SQL Server eliminates the need to move data around. Instead of transferring large and sensitive data over the network or losing accuracy on ML training with sample csv files, you can have your R/Python code execute within your database. You can work in Jupyter Notebooks, RStudio, PyCharm, VSCode, Visual Studio, wherever you want, and then send function execution to SQL Server bringing intelligence to where your data lives.

This tutorial will show you an example of how you can send your Python code from Jupyter notebooks to execute within SQL Server. The same principles apply to R and any other IDE as well. If you prefer to learn through videos, this tutorial is also published on YouTube here:


 

Environment Setup Prerequisites

  1. Install ML Services on SQL Server

In order for R or Python to execute within SQL, you first need the Machine Learning Services feature installed and configured. See this how-to guide.

  2. Install RevoscalePy via Microsoft’s Python Client

In order to send Python execution to SQL from Jupyter Notebooks, you need to use Microsoft’s RevoscalePy package. To get RevoscalePy, download and install Microsoft’s ML Services Python Client. Documentation Page or Direct Download Link (for Windows).

After downloading, open PowerShell as an administrator and navigate to the download folder. Start the installation with this command (feel free to customize the install folder): .\Install-PyForMLS.ps1 -InstallFolder "C:\Program Files\MicrosoftPythonClient"

Be patient, as the installation can take a little while. Once it is installed, navigate to the new path you installed into. Let’s make an empty folder and open Jupyter Notebooks: mkdir JupyterNotebooks; cd JupyterNotebooks; ..\Scripts\jupyter-notebook

Create a new notebook with the Python 3 interpreter:

 

To test whether everything is set up, import revoscalepy in the first cell and execute it. If there are no error messages you are ready to move forward.
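For example, the first cell can be as simple as the sketch below; if the import succeeds without errors, the client libraries are on your path.

# First notebook cell: verify that the ML Services Python client is usable
import revoscalepy
print("revoscalepy imported successfully")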

Database Setup (Required for this tutorial only)

For the rest of the tutorial you can clone this Jupyter Notebook from GitHub if you don’t want to copy-paste all of the code. This database setup is a one-time step to ensure you have the same data as this tutorial. You don’t need to perform any of these setup steps to use your own data.

  1. Create a database

Modify the connection string for your server and use pyodbc to create a new database.

import pyodbc

# creating a new db to load the Iris sample in
new_db_name = "MLRemoteExec"
connection_string = "Driver=SQL Server;Server=localhost\MSSQLSERVER2017;Database={0};Trusted_Connection=Yes;"

cnxn = pyodbc.connect(connection_string.format("master"), autocommit=True)

cnxn.cursor().execute("IF EXISTS(SELECT * FROM sys.databases WHERE [name] = '{0}') DROP DATABASE {0}".format(new_db_name))

cnxn.cursor().execute("CREATE DATABASE " + new_db_name)

cnxn.close()

print("Database created")

  2. Import Iris sample from SkLearn

Iris is a popular dataset for beginner data science tutorials. It is included by default in the sklearn package.

from sklearn import datasets
import pandas as pd

# SkLearn has the Iris sample dataset built in to the package
iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

  3. Use RevoscalePy APIs to create a table and load the Iris data

(You can also do this with pyodbc, sqlalchemy or other packages)

from revoscalepy import RxSqlServerData, rx_data_step

# Example of using RX APIs to load data into a SQL table. You can also do this with pyodbc
table_ref = RxSqlServerData(connection_string=connection_string.format(new_db_name), table="Iris")
rx_data_step(input_data=df, output_file=table_ref, overwrite=True)
print("New Table Created: Iris")
print("Sklearn Iris sample loaded into Iris table")

Define a Function to Send to SQL Server

Write any python code you want to execute in SQL. In this example we are creating a scatter matrix on the iris dataset and only returning the bytestream of the .png back to Jupyter Notebooks to render on our client.

def send_this_func_to_sql():
    from revoscalepy import RxSqlServerData, rx_import
    from pandas.tools.plotting import scatter_matrix  # in newer pandas versions: from pandas.plotting import scatter_matrix
    import matplotlib.pyplot as plt
    import io

    # remember: the scope of the variables in this func is within our SQL Server Python Runtime
    connection_string = "Driver=SQL Server;Server=localhost\MSSQLSERVER2017;Database=MLRemoteExec;Trusted_Connection=Yes;"

    # specify a query and load into pandas dataframe df
    sql_query = RxSqlServerData(connection_string=connection_string, sql_query="select * from Iris")
    df = rx_import(sql_query)
    scatter_matrix(df)

    # return bytestream of image created by scatter_matrix
    buf = io.BytesIO()
    plt.savefig(buf, format="png")
    buf.seek(0)
    return buf.getvalue()

Send execution to SQL

Now that we are finally set up, check out how easy sending remote execution really is! First, import revoscalepy. Create a SQL compute context, and then send the execution of any function seamlessly to SQL Server with rx_exec. No raw data had to be transferred from SQL to the Jupyter Notebook. All computation happened within the database and only the image file was returned to be displayed.

from IPython import display
import matplotlib.pyplot as plt
from revoscalepy import RxInSqlServer, rx_exec

# create a remote compute context with connection to SQL Server
sql_compute_context = RxInSqlServer(connection_string=connection_string.format(new_db_name))

# use rx_exec to send the function execution to SQL Server
image = rx_exec(send_this_func_to_sql, compute_context=sql_compute_context)[0]

# only an image was returned to my jupyter client. All data remained secure and was manipulated in my db.
display.Image(data=image)

While this example is trivial with the Iris dataset, imagine the additional scale, performance, and security capabilities that you now unlocked. You can use any of the latest open source R/Python packages to build Deep Learning and AI applications on large amounts of data in SQL Server. We also offer leading edge, high-performance algorithms in Microsoft’s RevoScaleR and RevoScalePy APIs. Using these with the latest innovations in the open source world allows you to bring unparalleled selection, performance, and scale to your applications.

Learn More

Check out SQL Machine Learning Services Documentation to learn how you can easily deploy your R/Python code with SQL stored procedures making them accessible in your ETL processes or to any application. Train and store machine learning models in your database bringing intelligence to where your data lives.

Interview – The Importance of Machine Learning for the Data Driven Business

To become more data-driven, organizations must mature their analytics and automate more of their decision-making processes for innovation and differentiation. Data science seems like the right approach, yet it is a new and fast-moving field that seems to have as many dead ends as it has highways to value. Cloudera Fast Forward Labs, led by Hilary Mason, shows companies the way.

Alice Albrecht is a research engineer at Cloudera Fast Forward Labs.  She spends her days researching the latest and greatest in machine learning and artificial intelligence and bringing that knowledge to working prototypes and delivering concrete advice for clients.  Prior to joining Fast Forward Labs, Alice worked in both finance and technology companies as a practicing data scientist, data science leader, and – most recently – a data product manager.  In addition to teaching machines to do cool things, Alice is passionate about mentoring and helping others grow in their careers.  Alice holds a PhD from Yale in cognitive neuroscience where she studied how humans summarize sensory information from the world around them and the neural substrates that underlie those summaries.

Read this article in German:
“Interview – Die Bedeutung von Machine Learning für das Data Driven Business“

Data Science Blog: Ms. Albrecht, you are a well-known keynote speaker for data science and artificial intelligence. While data science has already arrived in business, deep learning seems to be the new trend. Is artificial intelligence for business already normal business or is it an overrated hype?

I’d say it isn’t either of those two options.  Data science is now widely adopted but companies still struggle to integrate this new discipline into their existing businesses.  As for deep learning, it really depends on the company that’s looking into using this technique.  I wouldn’t say that deep learning is by any means part of business as usual, nor should it be.  It’s a tool like any other, and building a capacity for using a tool without clearly defined business needs is a recipe for disaster.

Data Science Blog: Just to make sure what we are talking about: What are the differences and overlaps between data analytics, data science, machine learning, deep learning and artificial intelligence?

Here at Cloudera Fast Forward Labs, we like to think of data analytics as collecting data and counting things (mostly for quick charts and reports).  Data science solves business problems by counting cleverly and predicting things with the data that’s collected.  Machine learning is about solving problems with new kinds of feedback loops that improve with more data.  Deep learning is a particular type of machine learning and is not itself a separate concept or type of tool.  Artificial intelligence taps into something more complicated than what we’re seeing today – it’s much broader than training machines to repetitively do very specialized tasks or solve very narrow problems.

Data Science Blog: And how can we add the context to big data?

From a theoretical perspective, data science has been around for decades. The building blocks for modern-day machine learning, deep learning and artificial intelligence are based on mathematical theorems that go back to the 1940’s and 1950’s. The challenge was that at the time, compute power and data storage capacity were simply too expensive for the approaches to be implemented. Today that’s all changed. Not only has the cost of data storage dropped considerably, open source technology like Apache Hadoop has made it possible to store any volume of data at costs approaching zero. Compute power, even highly specialised chip architectures, is now also available on demand and only for the time organisations need it, through public and private cloud solutions. The decreased cost of both data storage and compute power, together with a growing list of tools and resources readily available via the open source community, allows companies of any size to benefit from data (no matter the size of that data).

Data Science Blog: What are the challenges for organizations in getting started with data science?

I see two big challenges when getting started with data science.  One is ensuring that you have organizational alignment around exactly what type of work data scientists will deliver (and timing for those projects).  The second hurdle is around ensuring that you have the right data in place before you start hiring data scientists. This can be tricky if you don’t have in-house expertise in this area, so sometimes it’s better to hire a data engineer or a data strategist (or director of data science) before you ever get started building out a data science team.

Data Science Blog: There are many discussions about how to build a data-driven business. Is it just about using data science to get a better understanding of customer behavior?

No, being data driven doesn’t just mean better understanding your customers (though that is one way that data science can help in an organization).  Aside from building an organization that relies on data and analytics to help them make decisions (about customer behavior or otherwise), being a data-driven business means that data is powering your core products.

Data Science Blog: The number of technologies, tools and frameworks is increasing. For organizations this also means increasing complexity. Do companies need to stay always up-to-date or could it be an advice to wait and imitate pioneers later?

While it’s not critical (or advisable) for organizations to adopt every new advancement that comes along, it is critical for them to stay abreast of emerging frameworks.  If a business waits to see what others are doing, and therefore doesn’t invest in understanding how new advancements can affect its particular business, it has likely already missed the boat.

Data Science Blog: Global players have big budgets just for doing research and setting up data labs. Middle-sized companies need to see the break even point soon. How can we accelerate the value generation of data science?

Having a team that is highly focused on a specific set of projects that are well-scoped and aligned to the business makes all the difference.  Data science and machine learning don’t have to sacrifice doing research and being innovative in order to produce value.  The biggest difference is that smaller teams will have to be more aware of how their choice of project fits into emerging frameworks and their particular acute and near term business needs.

Data Science Blog: How does Cloudera Fast Forward Labs help other organizations to accelerate their start with machine learning?

We advise organizations, based on their particular needs, on what the latest advancements are in machine learning and data science, how to build and structure their data teams to develop the capabilities they need to meet their goals, and how to quickly implement custom forward-looking solutions using their own data and in-house expertise.

Data Science Blog: Finally, a question for our younger readers who are looking for a career as a data expert: What makes a good data scientist? Do you like to work with introverted coding nerds or the data loving business experts?

A good data scientist should be deeply curious and have a love for the ways in which data can lead to new discoveries and power the next generation of products.  We expect the people who thrive in this field to come from a variety of backgrounds and experiences.

Deep Learning and Human Intelligence – Part 1 of 2

Many people are under the impression that the new wave of data science, machine learning and/or digitalization is new, that it did not exist before. But its history is as long as the history of humanity and science itself. Scientific discovery could hardly take place without the necessary data. Even the process of discovering numbers included elements of machine learning: pattern recognition, comparison between different groups (ranking), clustering, etc. So what differentiates mathematical formulas from machine learning, and how does this relate to artificial intelligence?

Seen from the perspective of formulas there is no difference between the two; however, such a perspective limits the type of data to which they can be applied. Data stored in tables is structured data and is kept in so-called relational databases. The reason for such data storage is the connection between different fields, which assumes a well-established structure in advance, such as a company’s sales or balance sheet. However, with the emergence of personal computers, many daily activities have been digitalized: music, pictures, movies, and so on. All this information is stored unrelated to other data and is therefore called unstructured data.

Image source: IEEE International Conference on Computer Vision (ICCV), 2015, DOI: 10.1109/ICCV.2015.428

The essence of scientific discoveries was and will be structure. Not surprisingly, mathematical formulas revolve around relations between variables – information, in general. For example, Galileo derived the law of falling bodies by measuring the successive heights of a falling ball. The main difficulty was to obtain measurements at regular time intervals. But what if the data is not structured? Which mathematical formula should be applied then? There is a distribution of people’s heights, but no distribution for the pictures taken on all holidays of the last year; there is an amplitude for acoustic signals, but no function that detects the similarity between two songs. This is one of the reasons why machine learning focuses heavily on clustering and classification.

Roughly speaking, these simple examples are enough to categorize the difference between scientific discovery and machine learning. Science is about discovering relationships between different variables; Machine Learning tries to automate processes. Every technical improvement is a form of automation, so why is everything different in this case? Because the current automation deals with human intelligence. The car automates walking, the kitchen stove the fire, but Machine Learning automates parts of human intelligence. There is a difference between the previous automation steps and the automation of human intelligence. All the previous ones are either outside the human body – such as fire – or unconsciously executed (once learned) – walking, spinning, etc. The automation induced by Machine Learning affects a part of human intelligence that we consciously perceive. Of course, today’s machine learning tools are unable to automate all of human intelligence, but it is a fascinating step in that direction.

A breakthrough in Machine Learning was achieved in 2012 when the first Deep Learning algorithm for detecting types of images reached near-human accuracy. It could estimate the likelihood that an image shows a human face, a train, a ball or a fish without having “seen” the picture before. Such algorithms can be used in various areas: personally – facial recognition in pictures and/or social media – as tagging of images or videos; in medicine – cancer detection; and so on. For understanding such cutting-edge classification problems, one cannot avoid understanding how Deep Learning works. To see the beauty of such algorithms and, at the same time, to be able to comprehend the difficulty of working with them, an example will be the best guide.

The building blocks of Deep Learning are neurons, operational units which perform mathematical operations or logical operations like AND, OR, etc., and which are modelled after the neurons in the brain. Already in the 1950s, two neuroscientists, Hubel and Wiesel, observed that not all neurons in the brain respond in the same fashion to visual stimuli. Some responded only to horizontal lines, others to vertical lines; in other words, the brain is constructed of specialized neurons. Groups of such neurons are called, in the Machine Learning community, layers. As in the brain, neurons with different properties are clustered in different layers. This implies that layers also have specific properties and have to be arranged in a specific way, called an architecture. It is this architecture which differentiates Deep Learning from Artificial Neural Networks (an ANN is similar to a single layer).
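As a tiny illustration of this point (my own sketch, not from the original article), a single artificial neuron with two inputs can implement the logical AND mentioned above: a weighted sum of the inputs followed by a threshold.

import numpy as np

# A single artificial neuron computing the logical AND of two binary inputs:
# a weighted sum followed by a threshold (step) activation.
def and_neuron(x1, x2):
    w = np.array([1.0, 1.0])      # weights
    b = -1.5                      # bias acting as the threshold
    z = np.dot(w, [x1, x2]) + b   # weighted sum of the inputs
    return 1 if z > 0 else 0      # step activation

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", and_neuron(x1, x2))   # prints 1 only for (1, 1)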

Unfortunately, scientists still haven’t figured out how the brain works, so discovering how to train Deep Learning architectures from data was not an easy task; this is also the reason why another example is used here to explain the training of Deep Learning: the eye. One always has to remember: once it is known how Deep Learning works, it is simple to find an example which illustrates the working mechanism. For such an analogy, it is sufficient for someone without any knowledge of Deep Learning to keep in mind only the elements that compose such architectures: input data, different layers of neurons, output layers, and ReLUs.

Input data are any type of information; in our example it is light. Of course, Deep Learning is not limited to images or videos; it also applies to sound and/or time series, in which case the example would be the ear and sound waves, or the brain and numbers.

Layers can be seen as the cells in the eye. It is well known that the eye is formed of different layers connected to each other, each of them having different properties and functionalities. The same is true for the layers of a Deep Learning architecture: one can see the neurons as the cells and the layer as the tissue. While, mathematically, the neurons are nothing more than simple operations, usually linear weight functions, they can be seen as the properties of individual cells. Each layer has one weight matrix, which gives the neurons (and the layer) specific properties depending on the data and the task at hand.
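To make this point concrete, here is a small sketch (with arbitrary numbers, not taken from any real model) of a single layer as nothing more than a weight matrix applied to its input:

import numpy as np

# A layer with 3 inputs and 2 neurons is just a 3x2 weight matrix plus a bias;
# the "properties" of the layer are entirely encoded in these numbers.
x = np.array([0.5, -1.0, 2.0])   # input data
W = np.random.randn(3, 2)        # weight matrix, learned during training
b = np.zeros(2)                  # bias terms

layer_output = x @ W + b         # linear weight function of the input
print(layer_output)              # the 2 values this layer passes on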

It is here that the architecture becomes very important. What Deep Learning offers is a default setting of the layers with unknown weights. One can see this as trying to build an eye knowing that there are different types of cells and different ways in which tissues of such cells can be arranged, but not knowing exactly which cell is needed (with what properties) and which arrangement of layers works best. Such an approach has the advantage that one is capable of building any type of organ desired, but the disadvantage is also very obvious: it is time-consuming to find the appropriate cell properties and layer arrangements.

Still, the strategy of Deep Learning is a significant departure from classical Machine Learning approaches. The performance of Machine Learning methods is only as good as the feature engineering performed by Data Scientists, and thus depends on the creativity of the Data Scientist. In the case of Deep Learning the engineering of the features is performed automatically as part of the model building. This is a huge improvement, as the only difficult task is to have enough data and computing power to find the right weight matrices. Such an endeavor was also performed by nature for the eye – evolution – and this is also the reason why one can choose the eye as an example for Deep Learning. It is not surprising that Deep Learning is one of the best directions scientists have toward Artificial Intelligence today.

The evolution of the eye can be seen, from the perspective of Data Scientists, as the continuous training of a Deep Learning architecture which enables it to recognize and track one or more objects. The performance of the evolutionary process can be summed up as the fine-tuning of the cells, which become more and more susceptible to light, and the adaptation of layers to enable better vision. Different animals in different environments and with different targets – such as the hawk and the fly – developed different eyes than humans, but they all work according to the same principle. The tasks that Deep Learning performs today are similar; for example, it can be used to drive cars, but there is still a difference: there is no connection to other organs. Deep Learning is not the approximation of an Artificial Organism, like an android, but a simplified Artificial Organ that can work on its own.

Returning to the working mechanism of the Deep Learning architecture, we can follow the analogy of what happens when a ray of light hits the eye. Once the eye is fully adapted to the task, one can follow how the information enters the Deep Learning architecture (the Artificial Eye) by penetrating the input layer. Already here the question arises: what kind of eye is best? One where a small source of light can reach as many neurons as possible, or one where the light source reaches only a few neurons? In order to make such a decision, a last piece of the puzzle is required: the ReLU. One can see these as the synapses between neurons (cells) and, similarly, between tissues. By using continuous functions, such as one shaped like the letter ‘S’ (called a sigmoid), the information from one neuron is distributed over a large number of other neurons. If one uses the maximum function instead, then only a few neurons are updated with processed information from earlier layers.
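To make the contrast concrete, here is a small sketch (with arbitrary input values) comparing the two activation behaviors described above:

import numpy as np

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])   # pre-activation values of five neurons

# Sigmoid: every neuron passes on some signal, even for negative inputs
sigmoid = 1.0 / (1.0 + np.exp(-z))

# ReLU, i.e. a maximum with zero: negative inputs are cut off entirely,
# so only a few neurons forward information, giving a sparse activation pattern
relu = np.maximum(0.0, z)

print("sigmoid:", sigmoid)
print("relu:   ", relu)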

Such sparse structures between neurons were a major improvement in the development of techniques for training Deep Learning architectures. Again, there is a strong evolutionary analogy: energy efficiency. By needing fewer neurons, the tissues and the architecture are both kept to a minimal size, which enables flexibility in development and requires less energy. As the information is processed by the different layers, the Artificial Eye gathers more and more complex (non-linear) structures – the adapted features – which help it to decide, from past experience, what kind of object is detected.

This was part 1 of 2 of the article series. Continue with Part 2.

The 6 most in-demand AI jobs and how to get them

A press release issued in December 2017 by Gartner, Inc. explicitly states that 2020 will be a pivotal year in Artificial Intelligence-related employment dynamics. It states that AI will become “a positive job motivator”.

However, the Gartner report also sounds some alarm bells. “The number of jobs affected by AI will vary by industry; through 2019, healthcare, the public sector and education will see continuously growing job demand while manufacturing will be hit the hardest. Starting in 2020, AI-related job creation will cross into positive territory, reaching two million net-new jobs in 2025,” the press release adds.

This phenomenon is expected to strike worldwide, as a report carried by a leading Indian financial daily, The Hindu BusinessLine states. “The year 2018 will see a sharp increase in demand for professionals with skills in emerging technologies such as Artificial Intelligence (AI) and machine learning, even as people with capabilities in Big Data and Analytics will continue to be the most sought after by companies across sectors, say sources in the recruitment industry,” this news article says.

Before we proceed, let us understand what exactly does Artificial Intelligence or AI mean.

Understanding Artificial Intelligence

Encyclopedia Britannica explains AI as: “The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with human beings.” Classic examples of AI are computer games that can be played solo against the computer. In these, one player is a human while the other is the reasoning, analytical and other intellectual capability of a computer. Chess is one example of such a game. While playing chess with a computer, the AI will analyze your moves. It will predict and reason about why you made them and respond accordingly.

Similarly, AI imitates functions of the human brain to a very great extent. Of course, AI can never match the prowess of humans but it can come fairly close.

What does this mean?

It means that AI technology will advance exponentially. The main objective of developing AI is not to reduce dependence on humans in a way that results in job losses or mass retrenchment of employees. Having a large population of unemployed people is harmful to the economy of any country. Secondly, people without money would not be able to use most of the functions that are performed through AI, which would render the technology useless.

The advent and growing popularity of AI can be summarized in the words of Bill Gates. According to the founder of Microsoft, AI will have a positive impact on people’s lives. In an interview with Fox Business, he said that people would have more spare time, which would eventually lead to a happier life. However, he cautions that it will be a long time before AI starts making any significant impact on our daily activities and jobs.

Career in AI

Since AI primarily aims at making human life better, several companies are testing the technology. Global online retailer Amazon is one amongst these. Banks and financial institutions, service providers and several other industries are expected to jump on the AI bandwagon in 2018 and coming years. Hence, this is the right time to aim for a career in AI. Currently, there exists a great demand for AI professionals. Here, we look at the top six employment opportunities in Artificial Intelligence.

Computer Vision Research Engineer

A Computer Vision Research Engineer’s work includes research and analysis, and developing software, tools, and computer vision technologies. The primary role of this job is to ensure a customer experience that equals human interaction.

Business Intelligence Engineer

As the job designation implies, the role of a Business Intelligence Engineer is to gather data from multiple functions performed by AI such as marketing and collecting payments. It also involves studying consumer patterns and bridging gaps that AI leaves.

Data Scientist

A posting for Data Scientist on the recruitment website Indeed describes the role in these words: “A mixture between a statistician, scientist, machine learning expert and engineer: someone who has the passion for building and improving Internet-scale products informed by data. The ideal candidate understands human behavior and knows what to look for in the data.”

Research and Development Engineer (AI)

Research & Development Engineers are needed to find ways and means to improve functions performed through Artificial Intelligence. They research voice and text chat conversations conducted by bots or robotic intelligence with real-life persons to ensure there are no glitches. They also develop better solutions to eliminate the gap between human and AI interactions.

Machine Learning Specialist

The job of a Machine Learning Specialist is rather complex. They are required to study patterns such as the large-scale use of data, uploads and common words used in any language, and how these can be incorporated into AI functions, as well as to analyze and improve existing techniques.

Researchers

Researchers in AI are perhaps the best-paid lot. They are required to research various aspects of AI in any organization. Their role involves researching usage patterns, AI responses, data analysis, data mining and research, linguistic differences based on demographics and almost every human function that AI is expected to perform.

As with any other field, there are several other designations available in AI. However, these will depend upon your geographic location. The best way to find the demand for any AI job is to look for good recruitment or job posting sites, especially those specific to your region.

In conclusion

Since AI is a technology that is still gathering momentum, it will be some years before there is a flood of people who can be hired as freshers or experts in this field. Consequently, the demand for AI professionals is rather high. Median salaries for the jobs mentioned above range between US$ 100,000 and US$ 150,000 per year.

However, before leaping into AI, it is advisable to find out what other qualifications are required by employers. As with any job, some companies need AI experts who hold specific engineering degrees combined with additional qualifications in IT and a certificate that states you hold the required AI training. Despite this, now is a good time to start a career in the AI sector.

Ways AI & ML Are Changing How We Live

From Amazon’s Alexa, a personal assistant that can do anything from making your to-do list to giving a wide range of real-time information about the world around you, to Google’s DeepMind that has very recently made headlines for possibly being able to predict the future, AI and ML are the biggest development in human history.

Machine Learning Used by Hospitals

We hear a lot about Artificial Intelligence (AI) in the realm of insurance Big Data, but there isn’t much buzz around how AI and ML are revolutionising hospitals. US national health expenditures were around $3.4 trillion and are estimated to increase from 17.8 percent of GDP to 19.9 percent between 2015 and 2025. By 2021, industry analysts have predicted that the AI health market will reach $6.6 billion. By 2026, such increases in AI technology in the healthcare sector will save the economy around $150 billion annually.

Some of the most popular Artificial Intelligence applications used in hospitals now are:

  • Predictive Health Trackers – Technology that has the ability to monitor patients’ health status using real-time data collection. One such technology is the Health and Environmental Tracker (HET) which can predict if someone is about to have an asthma attack.
  • Chatbots – It isn’t only retail customer service that uses chatbots to deal with consumers. Now hospitals have automated physicians that inquire and route clinicians to the right specialists.
  • Predictive Analytics – Cleveland Clinic has partnered with Microsoft (Cortana) while Johns Hopkins has partnered with GE in order to create Machine Learning technology that has the ability to monitor patients and prevent patient emergencies before they happen. It does this by analysing data for primary indicators of potential risks.

Cognitive Marketing – Content Marketing on Steroids

Customer experience and content marketing are terms often tossed around in the world of business and advertising these days. Why do we bring them up now, you ask? Well, things are about to be kicked into sixth gear, thanks to Cognitive Marketing. To explain what that is, let’s go back a bit: remember when Google’s DeepMind AlphaGo bested the top human player at the game? This wasn’t some computer beating a bored office clerk at the game of Solitaire. In order to achieve that victory, Google’s AI had to “actually show its cognitive capability to ‘think’ like humans, because to win the game, ‘intuition’ was needed rather than just ‘logical reasoning’.” Similar algorithm-powered AI’s are enabling machines to learn and grow on their own. Soon, they’ll reach the potential to create content for marketeers at a massive scale. Not only that, but they’ll always deliver the right content, to the right kind of audience, at just the right time.

More Ways Than One: How Retail Is Harnessing AI & ML

  1. Developing Stores That Don’t Need Checkout Lines

Tech companies and online retail giants such as Amazon want to create cashier-free stores, or at least they are trying to. Last year Amazon launched Amazon Go which, put simply, uses sensors and hundreds of cameras to track what customers pick up and then charges the amount to an application on their smartphone. But only months into the experiment Amazon has said it needs to work out some kinks in the system. As of now, Amazon Go’s system can only handle 20 or so customers at a time.

Among other issues, The Guardian, citing an unnamed source, wrote in an article that the system runs into problems “…if an item has been moved from its specific spot on the shelf.” Located in Seattle, Washington, Amazon Go is now running in “beta mode” only for Amazon employees as it tests its systems. And these tests are showing that Amazon’s attempt at a cashier-free brick-and-mortar convenience store is far from ready for the real world. A Journal report stated, “For now, the technology functions flawlessly only if there are a small number of customers present, or when their movements are slow.”

  2. Could Drones Be Delivering Goods to Your Home One Day?

Imagine ordering something online from, let’s say, Amazon, and it arrives at your door in 30 minutes or so via drone. Does that sound like something out of the movie The Fifth Element? Maybe, but this technology is already here.

Amazon Prime Air made its first delivery to a customer via a GPS-guided flying drone on December 7th, 2016. It only took 13 minutes for the drone to deliver the merchandise to the customer. This sort of technology will be a huge game changer for retail. The supply chain industry is headed for a revolution – drone delivery is coming, and retailers who want to keep up really should adopt such technologies.

Even in 2016, consumers were totally ready to accept drone delivery. The Walker Sands Future of Retail 2016 study showed that 79 percent of US consumers said they would be “very likely” or “somewhat likely” to choose drone delivery if their product could be delivered within an hour. For me, I’d choose it just to see how cool it was. I think it would be pretty rad to have a drone land in my yard with my package, don’t you? Furthermore, other consumers stated they would pay up to $10 for a drone delivery. Lastly, 26 percent of consumers are already expecting to have their packages delivered to them in the next two years or so.

Driverless Delivery Vehicles Already Here as Well

There was a movie I watched some months ago – you most likely heard of it or even watched it. It was the latest movie about Wolverine titled Logan. There was a certain scene that never left my memory (basically because I found it awesome) where Logan and his companions were driving along a freeway full of driverless tractor trailers that had no tractor.

In an article written for pastemagazine.com, Carlos Alvarez of Getty wrote: “… Logan’s writer and director James Mangold’s inclusion of the self-driving trucking machines make it clear that the filmmaker understands the writing on the wall about the future of shipping. It’s a future without truck drivers.” He continues to explain that the movie takes place a little over 10 years from now in 2029.

“The change may well be here long before 2029. It’s only 2017, and already we’re seeing the beginnings of automated trucking taking over the industry. At the 2017 Consumer Electronics Show this January, Peloton Technology demonstrated “platooning,” where trucks are kept in a row on the highway to reduce wind resistance and save fuel. The trucks are controlled by computers on a “Level One” of autonomous driving,” Alvarez continued in his article.

Now in Germany, Mercedes-Benz has been developing and testing its Actros truck, which is fitted with a ‘highway pilot’ system that acts like an autopilot and includes a radar and stereo camera system. So far, German carmaker Daimler has restricted testing to the German autobahn. The autobahn is generally safer than city conditions since the curves are not as sharp. Since the tests started, this autonomous truck has already driven over 20,000 kilometres.

Did I Say Flying Taxis? Huh, Yeah I Did!

But, if you are still not amazed, then I am about to blow your socks off. Dubai has promised to build a fully autonomous public transportation system by 2030, including autonomous flying drone taxis! Now that is really something. And it isn’t a matter of when they’ll be produced and in use because they already are.

Manufactured in China by the drone-making firm EHang, these really freaking cool quad drones on steroids can carry one person weighing up to 100 kilogrammes (I weigh over that, guess I’m walking) plus maybe a backpack or suitcase. They can fly about 30 kilometres (or 19 miles) at a speed of 60 miles per hour, give or take. And if that isn’t cool enough, you won’t need any lessons on how to fly one. Simply push a button and it flies you from point A to point B. Whether or not you have to give it directions, I don’t know. Either way, this is most likely the coolest piece of tech out there right now.

Copyright @ CBS Interactive Inc.

A “Dialogue” on the recent advances in Conversational Artificial Intelligence (AI)

How important is it to interact, converse and emote in a world that is becoming closed and parochial? Conversational Artificial Intelligence (AI) offers a way to build agents that have the capability to learn and respond like humans, and thereby helps bring the long-term goal of General AI to fruition.

Conversation with artificial assistants, be it Microsoft’s Cortana, Apple’s Siri, Google Now or Amazon’s Alexa, has been gaining prominence in the last few years. So lie back, relax and enjoy the simple conversational interface on offer, as I take you on a short tour!

In this two-part blog series, I cover the latest developments in the field of dialogue and conversational Artificial Intelligence (AI). I give a brief overview of the current developments in this field and the many Language Understanding tools on the market, and in particular I review one of them – IBM Conversation.

It’s a rat race – so act and don’t overthink!

After the horrors of the Tay tweets – Microsoft’s conversational AI tweet bot that was eventually rolled back due to its racist and sexist tweets early this year – AI enthusiasts have had some good news over the last few months.

Microsoft hurried the launch of Tay tweets, its conversational AI bot, only to abandon it completely.

The Amazon Echo, Google Home and the smart home hub Apple has been preparing are good examples of how big companies are fighting tooth and nail to secure a place in your smart space. Here’s what François Chollet, researcher at Google and author of the popular framework Keras, has to say:

Whatever idea you started working on last week, a few other teams have probably been working on it for a month and are about to publish.
— François Chollet (@fchollet) October 5, 2016

Alexa Prize Competition

Just 4 weeks back, Amazon announced the Alexa Prize, an annual competition for university students dedicated to accelerating the field of conversational AI. This inaugural competition focuses on creating a social bot, using the Alexa Skills Kit (ASK), that can converse coherently and engagingly with humans on popular topics and news events. This gives student developer teams the chance to explore a plethora of advanced topics in the realm of AI, including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialog planning. With a huge cash prize at stake, goodies on offer and support from the ASK team, it would be well worth the experience to build a socially coherent bot! The last date for team submissions is October 28, 2016, and more details about the application process can be found here.

Say Allo!

Google Allo is a smart messaging app with personalized recommendations from the Google Assistant that lets you express yourself better with stickers, doodles, and HUGE emojis & text. Allo also allows you to get help from your Google Assistant without leaving the conversation. A one-to-one conversation can be initiated with your Assistant, which gets better the more you use it, by addressing it with the @google tag. More functional details are on the blog post Say hello to Google Allo: a smarter messaging app

IBM Pepper developer Conference

IBM BusinessConnect 2016, held on 4th October 2016 in Stockholm, Sweden, showcased some of the IBM Watson powered tools and their applications in the humanoid robot Pepper.

Yesterdays #IBMBCSE at Stockholm Waterfront was fantastic thanks to all IBMers, partners and customers, and thanks to #Pepper of course! pic.twitter.com/quZuaptu8Z
— IBM ClientCtr Nordic (@IBMCCNordic) October 5, 2016

Pepper is a SoftBank robot that uses IBM Watson technology at its core.

Banzai! (Live long) – Watch this first home robot commercial as the unforeseen future is coming!

The Watson Developer Conference, to be held in San Francisco from 9th to 10th November this year, is packed with technical talks, hands-on labs, and coding challenges to get you working with the tools that will make you a sought-after developer.

The IBM Global Industry Solution is located in Nice, France.

Joie de vivre – Samsung buys Viv

And after Google’s Allo and IBM’s Pepper, it was Samsung’s turn to jump onto the dialogue-based conversational AI bandwagon as it acquired Viv, built by the creators of Apple’s Siri. Viv is a more powerful version of Siri that brings in ubiquity. With self-generating software that is capable of writing its own code to accomplish new tasks through dynamic program generation, Viv handles new user tasks and builds plans on the fly!

In its demo video “Beyond Siri: The World Premiere of Viv with Dag Kittlaus” (embedded below) earlier this year, it was suggested that Viv would eventually be partnered with or sold to a mobile device maker.

With everyone wanting to invest heavily, the question was who and when! Hence, this announcement from Samsung doesn’t come as a big surprise.

Viv will ultimately provide services to Samsung and its platforms but remain an independent entity. Samsung hopes to disrupt the mobile market share with this acquisition. It can also extend Viv to other home devices; after all, it had purchased SmartThings for around $200M back in 2014. More details on the acquisition here: Samsung acquires Viv, a next-gen AI assistant built by the creators of Apple’s Siri

Don’t take it slow because there is Ozlo!

Ozlo, launched a few days back on iOS and the web, is another of the many sprouting AI assistants, and it makes good use of the memory of one’s previous interactions. Ozlo, at least by its name, attempts to be different from all the competitors’ assistants currently on the market, which use repetitive female names. The best thing is that it is integrated with a plethora of services like Yelp, TripAdvisor and IMDB, among many others, and uses Further Food, Authority Nutrition, Cookies, etc. to provide nutritional guidance. This is a huge advantage over its rival companies, which tend to prioritize their own services rather than integrating with existing ones. An in-depth review can be found here: Ozlo AI assistant is the new underdog filling the void left by Viv

And there were rumors that Apple was going to buy McLaren, which set eyeballs rolling, as a big tech giant would be entering the completely new domain of the automobile industry and could lead others like Google, Microsoft and IBM to follow suit and invest heavily!

Conference workshops also wanting a dialogue!

There are in total 50 workshops at NIPS 2016 this year covering a range of different Machine Learning topics.

  1. The Dialogue workshop, scheduled for the 10th of December, focuses on building agents capable of mutually coordinating with humans via communication; given its tremendous economic potential, the ability to converse also feeds into the overall goal of AI.
    For the call for papers, the deadline has been extended to midnight of October 23, 2016, and more details about the workshop schedule can be found at the chairs’ website LET’S DISCUSS: LEARNING METHODS FOR DIALOGUE NIPS 2016 WORKSHOP. The papers address the three high-level areas below:

    • Being data-driven especially the offline/online evaluation
    • Build complete applications or end-to-end systems
    • Model innovation to incorporate linguistic knowledge into the architecture
  2. Another workshop, on Interactive Machine Learning (IML), is to be held on the 9th of December. It focuses on how autonomous agents can adaptively collaborate with humans, solving a task by making use of interactions with them. Designing and engineering fully autonomous agents is difficult, and there is a compelling need for IML algorithms that enable artificial and human agents to collaborate and solve independent or shared goals.
    The call for papers explores new ideas in interactive learning, reports on research in progress as well as discussions of open problems and challenges facing interactive machine learning, with particular interest in research on the practical application of interactive learning systems (for robotics, virtual agents, dialog systems, among others) and the ability of these systems to handle the complexity of real-world problems. More details about the application process, requirements, application deadline, etc. are at the workshop portal Future of Interactive Learning Machines Workshop (FILM at NIPS 2016)

In the next part of this series on Conversational AI, I will cover the basics of the Language Understanding tools on the market that enable you to build a dialogue system.

Read the second Part here: A review of Language Understanding tools – IBM Conversation