Severity of lockdowns and how they are reflected in mobility data

The global spread of SARS-CoV-2 at the beginning of March 2020 forced the majority of countries to introduce measures to contain the virus. Governments found themselves facing a very difficult tradeoff between limiting the spread of the virus and bearing the potentially catastrophic economic costs of a lockdown. Notably, considering today's level of globalization, countries' responses varied a lot in severity and latency. In the overwhelming flood of media and social media coverage, a lot of misinformation and anecdotal evidence surfaced and stuck in people's minds. In this article, I try to take a more systematic view of two topics: the severity of government responses and the change in people's mobility due to the pandemic.

I want to look at several countries with different approaches to restraining the spread of the virus. I will look at governmental regulations and when and how they were introduced. For that, I refer to an index called the Oxford COVID-19 Government Response Tracker (OxCGRT)[1]. The OxCGRT follows, records, and rates the publicly available actions taken by governments. However, taking the regulations at face value does not guarantee that we have the whole picture. Equally interesting, therefore, is how the recommended levels of self-isolation and social distancing are reflected in mobility data, and we will look at that first.

The mobility dataset

The mobility data used in this article was collected by Google and made freely accessible[2]. The data reflects how the number and length of visits changed compared to a baseline from before the pandemic. The baseline is the median value for the corresponding day of the week in the period from January 3 to February 6, 2020. The dataset contains data in six categories. Here we look at only four of them: public transport stations, places of residence, workplaces, and retail/recreation (including shopping centers, libraries, gastronomy, and culture). The analysis intentionally omits the parks category (public beaches, gardens, etc.) and the grocery/pharmacy category. Mobility in parks is excluded due to a strong weather confound: the baseline was created in winter, and increased or decreased activity in parks (depending on the hemisphere) is expected simply as the weather changes. It would be difficult to disentangle this change from the change caused by the pandemic without referring to a different baseline. Grocery shops and pharmacies are excluded because the measures regarding shopping were very similar across the countries.
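To make the preprocessing concrete, here is a minimal pandas sketch of the smoothing used in the figures below (a moving average over +/- 6 days is a centered 13-day window). The file name and column names follow Google's 2020 CSV release of the mobility reports and may have changed since; treat them as assumptions.

import pandas as pd

# Google's Global Mobility Report CSV; file and column names as of the 2020
# release (assumption - check the download page for the current layout)
df = pd.read_csv("Global_Mobility_Report.csv", parse_dates=["date"])

# Keep country-level rows for Sweden (sub-regions carry a non-empty sub_region_1)
se = df[(df["country_region"] == "Sweden") & (df["sub_region_1"].isna())]
se = se.set_index("date").sort_index()

cols = [
    "transit_stations_percent_change_from_baseline",
    "residential_percent_change_from_baseline",
    "workplaces_percent_change_from_baseline",
    "retail_and_recreation_percent_change_from_baseline",
]

# Moving average over +/- 6 days = a centered 13-day rolling window
smoothed = se[cols].rolling(window=13, center=True).mean()
print(smoothed.dropna().head())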

Amid the COVID-19 pandemic, a lot of anecdotal claims surfaced that some countries, like Sweden, acted completely against the current by not introducing a lockdown. It was reported that there were absolutely no restrictions and that Sweden could basically be treated as a control group for comparing the effects of different lockdown approaches on the spread of the coronavirus. Looking at the mobility data (below), however, we can see that there was a change in the mobility of Swedish citizens in comparison to the baseline.

Fig. 1 Moving average (+/- 6 days) of the mobility data in Sweden in four categories.


Looking at the change in mobility in Sweden, we can see that the change in residential areas is small, but it does indicate some change in behavior. The change in the retail and recreation sector is more noticeable; most interestingly, it approaches baseline levels again at the beginning of June. The most substantial changes, however, are in the workplaces and transit categories. These are also much slower to return to the baseline, although a trend in that direction is becoming visible.

Next, let us have a look at the change in mobility in selected countries, separately for each category. Here, I compare Germany, Sweden, Italy, and New Zealand. (To see the mobility data for other countries visit https://covid19.datanomiq.de/#section-mobility).

Fig. 2 Moving average (+/- 6 days) of the mobility data.


Looking at the data, we can see that the changes in mobility in Germany and Sweden were of a similar order of magnitude, compared with the much larger changes in Italy and New Zealand. Without a doubt, behavior in Sweden changed the least from the baseline in all categories. Nevertheless, claiming that people's reactions to the pandemic in Sweden and Germany were polar opposites is not necessarily correct. Of all the categories presented, the biggest discrepancy between Sweden and Germany is in the retail and recreation sector. The changes in Italy and New Zealand reached very comparable levels, but in New Zealand they seem to have been much more dynamic, especially in approaching baseline levels again.

The government response dataset

The Oxford COVID-19 Government Response Tracker records regulations from a number of countries, rates them, and aggregates them into a few indices. A number between 0 and 100 reflects the level of the actions taken by a government. Here, I focus on the Containment and Health sub-index, which includes 11 indicators from two categories: containment and closure policies, and health system policies[3]. The actions included in the index are, for example, school and workplace closing, restrictions on public events, travel restrictions, public information campaigns, testing policy, and contact tracing.

Below, we look at a plot of the Containment and Health sub-index for the four aforementioned countries. Data and documentation are available here[4].
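As a hedged sketch of how such a plot can be reproduced, the snippet below pulls the OxCGRT CSV from the repository in [4] and extracts the sub-index; the file path and the ContainmentHealthIndex column name match the 2020 layout of the repository and may have moved since.

import pandas as pd

# OxCGRT data from the GitHub repository [4]; path and column names follow
# the 2020 layout (assumption - the repository may have been reorganized)
url = ("https://raw.githubusercontent.com/OxCGRT/covid-policy-tracker/"
       "master/data/OxCGRT_latest.csv")
ox = pd.read_csv(url, low_memory=False)
ox["Date"] = pd.to_datetime(ox["Date"], format="%Y%m%d")

if "RegionName" in ox.columns:           # later layouts add sub-national rows
    ox = ox[ox["RegionName"].isna()]     # keep national rows only

countries = ["Sweden", "Germany", "Italy", "New Zealand"]
subset = ox[ox["CountryName"].isin(countries)]

# One column per country, indexed by date
index = subset.pivot_table(index="Date", columns="CountryName",
                           values="ContainmentHealthIndex")
print(index.max())  # peak sub-index value per country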

Fig. 3 Oxford COVID-19 Government Response Tracker, the Containment and Health sub-index.


Here the difference between Sweden and the other countries becomes more apparent. Nevertheless, the Swedish government did take some measures to contain the spread of SARS-CoV-2. At its highest, the index reached 45 points in Sweden, 73 in Germany, 92 in Italy, and 94 in New Zealand. In all these countries except Sweden the index has started dropping again; the drop is most dynamic in New Zealand, where the index has basically fallen to the Swedish level.

Conclusions

As we have hopefully seen, governments' responses to the COVID-19 pandemic differed substantially, as did the resulting changes in the mobility behavior of their inhabitants. However, the discrepancies were probably not as big as reported in the media.

The overwhelming presence of social media may have blown some of these differences out of proportion. For example, the discrepancy in mobility behavior between Sweden and Germany was biggest in the retail and recreation sector, which involves cafés, restaurants, cultural venues, and shopping centers. It is possible that those activities were the ones people in lockdown missed the most. Watching Swedes participate in them, it was easy to extrapolate to the overall landscape of the country's response to the virus.

It is very hard to say which country's approach will bring the best outcomes for people's well-being and the economy. The ongoing pandemic will remain a topic of extensive research for many years to come. We will (most probably) eventually find out which approach to lockdown was optimal, or at least come close to finding out. For the time being, however, it is important to remember that there are many factors in play, and looking at one type of data can be misleading. Comparing countries with different histories, weather, political and economic climates, or population densities can be misleading as well. But it is still more insightful than not looking into the data at all.

[1] Hale, Thomas, Sam Webster, Anna Petherick, Toby Phillips, and Beatriz Kira (2020). Oxford COVID-19 Government Response Tracker, Blavatnik School of Government. Data use policy: Creative Commons Attribution CC BY standard.

[2] Google LLC “Google COVID-19 Community Mobility Reports”. https://www.google.com/covid19/mobility/ retrieved: 04.06.2020

[3] See documentation https://github.com/OxCGRT/covid-policy-tracker/tree/master/documentation

[4] https://github.com/OxCGRT/covid-policy-tracker  retrieved on 04.06.2020

How Text to Speech Voices Are Used In Data Science

When it comes to voices, text-to-speech platforms are bringing versatility to a new scale by implementing voices that sound more personal and less like a robot. As these services gain traction, vocal quality and implementation improve to produce speech that sounds as if it were coming from a human mouth.

The intention of most text-to-speech platforms has always been to provide experiences that users feel comfortable using. Voices are a huge part of that, so great strides have been taken to ensure that they sound right.

How Voices are Utilized

Voices in text-to-speech are generated by the computer itself. As the computer-generated voices render text into spoken responses, they make up what we hear as dialogue read to us. These voice clips initially had the problem of sounding robotic and impersonal as they were pulled together digitally. Lately, though, the technology has improved to bring faster response times in rendering words, as well as stringing them together seamlessly. This has the advantage of making a computer-generated voice sound much more natural and human. As people seek to connect more with the works they read, having a human-sounding voice is a huge step in letting listeners relate to those works.
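As a minimal illustration of how such a voice is driven from code, the sketch below uses the open-source pyttsx3 Python library, which wraps the speech engines built into the operating system; the library choice, voice index, and speaking rate are illustrative assumptions, not tied to any particular platform discussed here.

import pyttsx3  # offline text-to-speech wrapper around the OS speech engines

engine = pyttsx3.init()

# Each installed voice has its own timbre; pick one and slow the speech down
voices = engine.getProperty("voices")
engine.setProperty("voice", voices[0].id)  # index 0 is an arbitrary choice
engine.setProperty("rate", 160)            # roughly words per minute

engine.say("Hello! This sentence is rendered by a computer-generated voice.")
engine.runAndWait()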

To give an example of where this works, you might have a GPS in your car. The GPS has a function where it will narrate the car's route and read out each instruction. Some GPS companies have made full use of this feature and added fun voices to help entertain drivers. These include Darth Vader and Yoda from Star Wars, or having Morgan Freeman or Homer Simpson narrate your route. Different voice types are used in services depending on the situation. Professional uses like customer service centers keep automated voices sounding professional and courteous when assisting customers. Educational systems keep softer and kinder-sounding voices to sound more friendly to students.

Compared to older solutions, vocal variety in text-to-speech services has taken huge leaps as more people see the value of having a voice they can connect to. Expressive voices and emotional variance are being applied to help convey this, with happy- or sad-sounding voices implemented wherever appropriate. As time goes on, these services will get better at reading the context within sentences to apply emotion and tone at the correct times, and will improve overall vocal quality as well. They reinvent past methods by advancing the once static and robotic sounds that used to be commonplace among text-to-speech services.

More organizations are adopting these services to expand their reach to consumers who might not otherwise have the capability to use their offerings. Having clear and relatable voices matters because customers and users will be drawn to them considerably more than if no such voices were offered at all. In the near future, text-to-speech voices will develop even further, enhancing the way people of all kinds connect to the words they read.

Data Analytics & Artificial Intelligence Trends in 2020

Artificial intelligence has infiltrated all aspects of our lives and brought significant improvements.

Although the first thing that comes to most people’s minds when they think about AI are humanoid robots or intelligent machines from sci-fi flicks, this technology has had the most impressive advancements in the field of data science.

Big data analytics is what has already transformed the way we do business as it provides an unprecedented insight into a vast amount of unstructured, semi-structured, and structured data by analyzing, processing, and interpreting it.

Data and AI specialists and researchers are likely to have a field day in 2020, so here are some of the most important trends in this industry.

1. Predictive Analytics

As its name suggests, this trend will be all about using gargantuan data sets in order to predict outcomes and results.

This practice is slated to become one of the biggest trends in 2020 because it will help businesses improve their processes tremendously. It will find its place in optimizing customer support, pricing, supply chain, recruitment, and retail sales, to name just a few.

For example, Amazon has already been leveraging predictive analytics for its dynamic pricing model. Namely, the online retail giant uses this technology to analyze the demand for a particular product, competitors’ prices, and a number of other parameters in order to adjust its price.

According to stats, Amazon changes prices 2.5 million times a day, meaning a particular product's price fluctuates roughly every 10 minutes, which requires an extremely capable predictive analytics system.
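To make the principle concrete, here is a toy sketch of demand-driven price adjustment on synthetic data. It illustrates the general idea of predicting demand and picking a revenue-maximizing price; it is in no way Amazon's actual system, and all numbers are invented.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic history: our price, a competitor's price, and observed demand
price = rng.uniform(8, 15, 500)
competitor = rng.uniform(8, 15, 500)
demand = 200 - 12 * price + 6 * competitor + rng.normal(0, 5, 500)

model = GradientBoostingRegressor().fit(
    np.column_stack([price, competitor]), demand)

# Given today's competitor price, pick the price with the highest
# predicted revenue (price times predicted demand)
today_competitor = 11.0
candidates = np.linspace(8, 15, 71)
features = np.column_stack(
    [candidates, np.full_like(candidates, today_competitor)])
revenue = candidates * model.predict(features)
print(f"best price: {candidates[revenue.argmax()]:.2f}")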

2. Improved Cybersecurity

In a world of advanced technologies, IoT, and remotely controlled devices, top-notch protection is of critical importance.

Numerous businesses and individuals have fallen victim to ruthless criminals who can steal sensitive data or wipe out entire bank accounts. Even some big and powerful companies have suffered huge financial and reputational blows due to the cyber attacks they were subjected to.

This kind of crime is particularly harsh for small and medium businesses. Stats say that 60% of SMBs are forced to close down after being hit by such an attack.

AI again takes advantage of its immense potential for analyzing and processing data from different sources quickly and accurately. That’s why it’s capable of assisting cybersecurity specialists in predicting and preventing attacks.

If an attack does emerge, AI shortens the response time significantly, so that the worst-case scenario can be avoided.
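As a minimal sketch of what such prediction looks like in practice, the example below trains an isolation forest on normal network-flow records and flags a suspicious one; the three features are invented for illustration, and real systems use far richer data.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Invented flow features: bytes sent, duration (s), failed logins per minute
normal = np.column_stack([rng.normal(5000, 800, 1000),
                          rng.normal(2.0, 0.5, 1000),
                          rng.poisson(0.2, 1000).astype(float)])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[250000.0, 0.3, 14.0]])  # huge burst, many failed logins
print(detector.predict(suspicious))             # -1 marks an anomaly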

When we’re talking about avoiding security risks, AI can improve enterprise risk management, too, by providing guidance and assisting risk management professionals.

3. Digital Workers

In 2020, an army of digital workers will transform the traditional workspace and take productivity to a whole new level.

Virtual assistants and chatbots are examples of already existing digital workers, but there will be even more of them. According to research, this trend is on the rise, as AI software and robots are expected to increase by 50% by 2022.

Robots will take over even small office tasks. The point is to streamline the entire business process, and that can be achieved by training robots to perform small and simple tasks just as human employees do. The only difference will be that digital workers will do them faster and without mistakes.

4. Hybrid Workforce

Many people worry that AI and automation will steal their jobs and render them unemployed.

Even the stats are bleak – AI will eliminate 1.8 million jobs. But, on the other hand, it will create 2.3 million new jobs – a net gain of half a million.

So, our future is actually AI and humans working together, and that’s what will become the business normalcy in 2020.

Robotic process automation and different office digital workers will be in charge of tedious and repetitive tasks, while more sophisticated issues that require critical thinking and creativity will be human workers’ responsibility.

One of the most important things about creating this hybrid workforce is for businesses to openly discuss it with their employees and explain how these new technologies will be used. A regular workforce has to know that they will be working alongside machines whose job will be to speed up the processes and cut costs.

5. Process Intelligence

This AI trend will allow businesses to gain insight into their processes by using all the information contained in their systems and creating an overall, real-time, and accurate visual model of all their processes.

What’s great about it is that it’s possible to see these processes from different perspectives – across departments, functions, staff, and locations.

With such a visual model, it's possible to properly analyze these processes, identify potential bottlenecks, and eliminate them before they emerge.
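A minimal sketch of the underlying idea, assuming a hypothetical event log with case IDs, activity names, and timestamps: simply averaging the waiting time of each hand-off already exposes bottleneck candidates.

import pandas as pd

# Hypothetical event log: one row per executed activity
log = pd.DataFrame({
    "case":      [1, 1, 1, 2, 2, 2],
    "activity":  ["receive", "approve", "ship"] * 2,
    "timestamp": pd.to_datetime([
        "2020-01-01 09:00", "2020-01-03 10:00", "2020-01-03 12:00",
        "2020-01-02 08:00", "2020-01-06 09:00", "2020-01-06 10:00"]),
})

log = log.sort_values(["case", "timestamp"])
log["next_activity"] = log.groupby("case")["activity"].shift(-1)
log["wait"] = log.groupby("case")["timestamp"].shift(-1) - log["timestamp"]

# Average waiting time per hand-off; the slowest edges are bottleneck candidates
print(log.dropna().groupby(["activity", "next_activity"])["wait"].mean())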

Besides, as this is AI and data analytics at their best, this technology will also facilitate decision-making by predicting the future results of tech investments.

Needless to say, Process Intelligence will become an enterprise standard very soon, thanks to its ability to provide a better understanding and effective management of end-to-end processes.

As you can see, in 2020, these two advanced technologies will continue to evolve and transform the business landscape and change it for the better.

Interview – There is no stand-alone strategy for AI, it must be part of the company-wide strategy

Ronny Fehling is Partner and Associate Director for Artificial Intelligence at Boston Consulting Group GAMMA. With more than 20 years of continually progressive experience in leading business and technology innovation, spearheading digital transformation, and aligning corporate strategy with Artificial Intelligence, he helps industry-leading organizations grow their top line and kick-start their digital transformation.

Ronny Fehling is furthermore a speaker at Predictive Analytics World for Industry 4.0 in May 2020.

Data Science Blog: Mr. Fehling, you consult companies and business leaders on AI and how to get started with it. AI as a term is often misleading. How do you define AI?

This is a good question. I think there are two ways to answer this:

From a technical standpoint, I often see expressions like “simulation of human intelligence” and “acting like a human”. I find these terms more misleading than helpful. I studied AI back when it wasn't yet “cool” and we were still in the middle of the AI winter. And yes, we have much more compute power and access to data now, but we also think about data in a very different way. I typically distinguish between machine learning, which uses algorithms and statistical methods to identify patterns in data, and AI, which for me attempts to interpret the data in a given context. So machine learning can help me identify and analyze frequency patterns in text and even predict the next word I will type based on my history. AI will help me identify ‘what’ I'm writing about – even if I don't explicitly name it. It can tell me that when I ask “I'm looking for a place to stay”, I might want to see a list of hotels around me. In other words: machine learning can detect correlations and similar patterns; AI uses machine learning to generate insights.
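To illustrate the “frequency patterns in text” part of that distinction, here is a deliberately tiny next-word predictor built from bigram counts; the typing history is invented, and real keyboards use far more sophisticated models.

from collections import Counter, defaultdict

# Toy typing history (invented for illustration)
history = "i am looking for a place to stay i am looking for a hotel".split()

# Count which word follows which
following = defaultdict(Counter)
for word, nxt in zip(history, history[1:]):
    following[word][nxt] += 1

def predict(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("looking"))  # 'for' - the most frequent follower in the history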

I always wondered why top executives so frequently ask about the definition of AI, because at first it seemed to me not that relevant to the discussion of how to align AI with their corporate strategy. However, I started to realize that their question is ultimately about “What is AI and what can it do for me?”.

For me, AI can do three things really well, which humans cannot really do and previous approaches couldn't cope with:

  1. Finding similar patterns in historical data (see the sketch after this list). Imagine 20 years of data such as maintenance or repair documents from a manufacturing plant. Although they describe work done on a multitude of products due to a multitude of possible problems, AI can use them to look for a very similar situation based on a current problem description. This can be used to identify a common root cause as well as a common solution approach, saving valuable time for the operation.
  2. Finding correlations across time or processes. This is often used in predictive maintenance use cases. Here, the AI tries to see which events typically happen some time before a failure. This way, it can alert the operator much earlier about an impending failure, say due to a change in the vibration pattern of the machine.
  3. Finding an optimal solution path based on many constraints. There are many problems in the business world where choosing the optimal path in a complex situation is critical. Let's say that a severe weather warning at an airport suddenly forces an airline to change its scheduling because of reduced airport capacity. Delays for some aircraft can cause disruptions because passengers or personnel are no longer able to make their connections. Knowing which aircraft to delay, which to cancel, and which to switch, while causing the minimal amount of disruption to passengers, crew, maintenance, and ground crew, is something AI can help with.
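For the first capability, a deliberately simple sketch: TF-IDF similarity over a handful of invented maintenance snippets retrieves the most similar past case for a new problem description. Production systems would use richer text representations, but the retrieval idea is the same.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets from years of maintenance reports
history = [
    "pump bearing overheated, replaced bearing and flushed lubricant",
    "conveyor belt misaligned, re-tensioned belt",
    "pump vibration high, worn bearing found during inspection",
]
problem = ["strong vibration on pump, temperature rising near bearing"]

# Rank past cases by cosine similarity to the current problem description
vec = TfidfVectorizer().fit(history + problem)
scores = cosine_similarity(vec.transform(problem), vec.transform(history))[0]
best = scores.argmax()
print(f"most similar past case: #{best} (score {scores[best]:.2f})")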

The key now is to link these fundamental capabilities with the business context of the company and with how they can ultimately help it transform.

Data Science Blog: Companies are still starting out with their own company-wide data strategies. And now they are talking about AI strategies. Is that something that should be handled separately?

In my experience – based both on having seen the implementations of several corporate data strategies and on my background at Oracle – the data strategy and AI strategy are co-dependent and cannot be separated. Very often I hear from clients that they think they first need to get their data in order before doing AI projects. And yes, without good data access, AI cannot really work. In fact, most of the time spent on AI is spent on processing, cleansing, understanding, and contextualizing the data. However, you cannot really know what data will be needed in which form without knowing what you want to use it for. This is why strategies that handle data and AI separately mostly fail and generate huge costs.

Data Science Blog: What are the important steps for developing a good data strategy? Is there something like a general approach?

In my eyes, the AI strategy defines the data strategy step by step as more use cases are implemented. Rather than focusing too quickly on how to get all corporate data into a data lake, it is much more important to start building use-case, technology, and data governance. This governance has to be established once the AI strategy starts to mature, to enable scale-up and productization. The first step is to find the (very few) use cases that can serve as lighthouse projects to demonstrate (1) value impact, (2) a way to go from MVP to pilot, and (3) how to address the data challenge. This will then more naturally identify the elements of governance, data access, and technology that are required.

Data Science Blog: What are the most common questions from business leaders to you regarding AI? Why do they hesitate to get started?

By far the most common question I get is: how do I get started? The hesitations often come from multiple sources, like: “We don't have the talent in house to do AI”, “Our data is not good enough”, “We don't know which use case to start with”, “It's not easy for us to embrace an agile and failure culture because our products are mission critical”, “We don't know how much value this can bring us”.

Data Science Blog: Most managers prefer to start small and with lower risk. They seem to postpone bigger ideas to a later stage, at least until some milestones have been reached. Is that a good idea, or should they think bigger?

AI is often associated (rightfully so) with a new way of working – agile and embracing failure. Similarly, there is also the perception of significant cost to starting with AI (talent, technology, data). These perceptions often lead managers to start with several small, low-ambition use cases where failure isn't that grave. Once these have proven themselves somehow, they would then move on to bigger projects. The problem with this strategy is that, on the one hand, you fragment your few precious AI resources across too many projects, and at the same time you cannot really demonstrate an impact, since the projects weren't chosen based on their impact potential.

The AI pioneers were typically successful by “thinking big, starting small and scaling fast”. You start by assessing the value potential of a use case, for example: my current OEE (Overall Equipment Efficiency) is at 65%; there is an addressable loss of 25%, which would grow my top line by $X. With the help of AI experts, you then create a hypothesis of how you think you can reduce that loss. This might mean choosing one specific piece of equipment and targeting 50% of the addressable loss. This is now the measure against which you define your failure or non-failure criteria. Once you have a proven MVP that can address this loss, you scale up by piloting it in a real-life setting and then rolling it out to all the equipment. At every step of this process, you have a failure criterion that is measured by the impact value.



Image Source: Pixabay (https://pixabay.com/photos/classroom-school-education-learning-2093744/)

The Data Surrounding Higher Education and COVID-19

Just a few short weeks ago, it would have seemed impossible for some microscopic pathogen to upend our lives as we knew them, but the novel coronavirus has proven us breathtakingly wrong.

It has suddenly and unexpectedly changed everything we had thought was most stable and predictable in our lives, from the ways that we work to the ways we interact with one another. It’s even changed the way we learn, as colleges and universities across the nation shutter their doors.

But what is the real impact of COVID-19 on higher education? How are college students really faring in the face of the pandemic, and what can we do to support them now and in the post-pandemic life to come?

The Scramble is On

Probably the most significant challenge that schools, educators, and students alike are facing is that no one really saw this coming, so now we’re trying to figure out how to protect students’ education while also protecting their physical health. We’re having to make decisions that impact millions of students and faculty and do that with no preparation whatsoever.

To make matters worse, faculty are having to convert their classes to a format the majority have never even used before. Before the lockdown, more than 70% of faculty in higher education had zero experience with online teaching. Now they're being asked to convert their entire semester's course schedule from an in-class to an online format, and they're having to do it in a matter of weeks, if not days.

For students who’ve never taken a distance learning course before, these impromptu, online, cobbled-together courses are hardly the recipe for academic success. The challenge is even greater for lab-based courses, where content mastery depends on hands-on work and laboratory applications. To solve this problem, some of the newly-minted distance ed instructors are turning to online lab simulations to help students make do until the real thing is open to them again.

Making Do

It’s not just the schools and the faculty that have been caught off guard by the sudden need to learn while under lockdown. Students are also having to hustle to make sure they have the technology they need to move their college experience online. Unfortunately, for many students, that’s not always easy, and for some, it’s downright impossible.

Studies show that large swaths of the student population – first-generation college students, community college students, immigrants, and lower-income students – typically rely on on-campus facilities to access the technology they need to do their work. When physical campuses close, and the community libraries and hotspots with them, so too does the chance for many students to take their learning online.

Students in urban environments face particular risks. Even if they are able to access the technology they need to engage in distance learning, they may find it impossible to socially isolate. The need to access a hotspot or wi-fi connection might put them in unsafe proximity to other students, not to mention the millions of workers now forced to telecommute.

The Good News

America’s millions of new online learners and teachers may have a tough row to hoe, but the news isn’t all bad. Online education is by no means a new thing. By 2017, nearly 7 million students were enrolled in at least one distance education course according to a recent survey by the National Center for Education Statistics.

It isn’t as though the technology to provide a secure, user-friendly learning experience doesn’t exist. The financial industry, for example, has played a leading role in developing private, responsive, and highly-customizable technology solutions to meet practically any need a client or stakeholder may have.

The solutions used for the financial sector can be built on and modified for the online learning experience to ensure the privacy of students, educators, and institutions while providing real-time access to learning tools and content to classmates and teachers.

A New Path?

As challenging as it may be, transitioning to online learning not only offers opportunities for the present but may well open up new paths for the future. While our world may finally be approaching the downward slope of the curve, and while we may be seeing the light at the end of the tunnel, until there's a vaccine we likely haven't seen the last of COVID-19.

And even when we lay the COVID beast to rest, infectious disease, unfortunately, is a fact of human life. For students just starting to think about their career paths, this lockdown may well be the push they need to find a career that’s well-suited to this “new normal.”

For instance, careers in data science transition perfectly from onsite to at-home work, and as epidemiological superheroes like Dr. Fauci and Dr. Birx have shown, they often involve important, life-saving work. These are also careers that can be pursued largely, if not exclusively, online. Whether you're a complete newbie or a veteran of the field, there is a large range of degree and certification programs available online to launch or advance your data science career.

It might be that your college-with-corona experience is pointing your life in a different direction, toward education rather than data science. With a doctorate in education, your future career path is virtually unlimited. You might find yourself teaching, researching, leading universities or developing education policy.

What matters most is that with an EdD, you can make a difference in the lives of students and teachers, just as your teachers and administrators are making a difference in your life. You can be the guiding and comforting force for students in a time of crisis and you can use your experiences today to pay it forward tomorrow.

Optimize AI Talent: Perception from Across the Globe

Despite the AI hype, the AI skill gap keeps widening even as businesses race ahead with the technology.

The “Global Talent Competitiveness Index (GTCI) 2020” report covers multiple parameters, both national and organizational, to generate insights for further action. It compiles 70 variables across 132 national economies around the globe, spanning all income groups and levels of development.

The sole purpose of the GTCI report is to help narrow the skill gap by delivering the right data inputs. The figures mentioned in the report could be of value to private and public organizations alike.

The GTCI report covers multiple themes that need to be addressed:

As the race to embrace AI speeds up, it is essential to address the challenges AI raises and how best these problems can be solved.

The pace at which AI is developing is transforming the way we work – forcing a technology shift, a change in corporate structures, and a change in the innovation system for AI professionals in every possible way.

There’s more that is needed to be done as AI and automation continue to affect the way we work.

  • Reskilling in workplaces to eliminate the dearth of talent

As AI roles keep evolving, organizations need a larger workforce, especially for technology roles such as AI engineers and AI specialists. Looking closely at the statistics, you cannot fail to notice that the number of AI job roles is on the rise, but talent is scarce.

Employers must take on reskilling as a critical measure – how else will the technology market keep up with changing trends? Reskilling in the form of training or AI certifications should be emphasized, and having in-house AI talent is an added advantage for a company.

  • The skill gap between low-performing and high-performing countries is widening

Based on the GTCI report, the skill gap is growing not only across industries but between nations. The report also highlights which countries lack basic digital skills, a lack that contributes heavily to the digital divide between nations.

  • A high level of cooperation is needed to embrace AI's benefits

As much as the world shows concern about embracing AI, not much has been done to achieve these transformations – and AI has huge potential to transform society and make it a better place to live. To realize these benefits, however, corporations must engage with AI regulation.

From a talent acquisition perspective, this simply means employers will need to offer more training and reskilling opportunities.

  • AI to allow nations to skip generations

On the technological front, AI makes it possible for nations to skip entire technology generations, although this remains uncommon due to structural obstructions.

  • Cities are now competing to become talent magnets and AI hubs

As AI continues to hit the market, organizations are aggressively coming up with newer policies to attract and retain AI professionals.

No doubt cities are striving to attract the right kind of talent as competition keeps increasing. Many cities are competing to become core AI engines, transforming energy grids, transportation, and multiple other segments. Cities are now becoming the main test beds for AI-based tools, e.g., self-driving vehicles, tele-surveillance, and facial recognition.

  • Sustainable AI comes when society is equally up for it

With certain communities not adopting or accepting the advent of AI, it is hard to guarantee that these communities will not try to distort AI narratives. As a result, it is crucial for multiple stakeholders to embrace AI and develop the AI workforce in parallel.

Not to forget, regulators and policy-makers have an equal role to play in ensuring a smooth transition in jobs. As AI-induced transformation skyrockets, educators and leaders need to move quickly, as the new generations are focused on doing their bit for society.

Two decades have passed since McKinsey declared the war for talent – particularly for high-performing employees. As organizations look extensively to hire the right talent, it is imperative to both attract and retain talent at large.

Despite the unprecedented growth in AI technologies, it is nearly unanimous that organizations have yet to master hiring in AI, let alone retaining that talent. They're not even getting better at it.

Even at top tech companies such as Google and Amazon, the demand for top talent outstrips the supply. Although you may find thousands of candidates applying for the same job role, the competition just gets tougher, since such employers are tough nuts to crack and pleasing them is not an easy task.

If these tech giants are finding it difficult to hire the right talent, you could imagine the plight of other companies.

Given the optimistic view of the technology's future, it is hard to argue that the war for talent will subside any time soon.

The good news is that organizations that look forward to adopting new technology and reskilling their employees will most likely gain a competitive edge.

Interview – Customer Data Platform, more than CRM 2.0?

Interview with David M. Raab from the CDP Institute

David M. Raab is a consultant specializing in marketing software and service vendor selection, marketing analytics, and marketing technology assessment. He is also the founder of the Customer Data Platform Institute, a vendor-neutral educational project that helps marketers build a unified customer view available to all of their company's systems.

Furthermore, he is a keynote speaker at the Predictive Analytics World event 2019 in Berlin.

Data Science Blog: Mr. Raab, what exactly is a Customer Data Platform (CDP)? And where is the need for it?

The CDP Institute defines a Customer Data Platform as „packaged software that builds a unified, persistent customer database that is accessible by other systems“.  In plainer language, a CDP assembles customer data from all sources, combines it into customer profiles, and makes the profiles available for any use.  It’s important because customer data is collected in so many different systems today and must be unified to give customers the experience they expect.

Data Science Blog: Is it something like a CRM System 2.0? What Use Cases can be realized by a Customer Data Platform?

CRM systems are used to interact directly with customers, usually by telephone or in the field.  They work almost exclusively with data that is entered during those interactions.  This gives a very limited view of the customer since interactions through other channels such as order processing or Web sites are not included.  In fact, one common use case for CDP is to give CRM users a view of all customer interactions, typically by opening a window into the CDP database without needing to import the data into the CRM.  There are many other use cases for unified data, including customer segmentation, journey analysis, and personalization.  Anything that requires sharing data across different systems is a CDP use case.

Data Science Blog: When does a CDP make sense for a company? It is more relevant for retail and financial companies than for industrial companies, isn't it?

CDP has been adopted most widely in retail and online media, where each customer has many interactions and there are many products to choose from.  This is a combination that can make good use of predictive modeling, which benefits greatly from having more complete data.  Financial services was slower to adopt, probably because they have fewer products but also because they already had pretty good customer data systems.  B2B has also been slow to adopt because so much of their customer relationship is handled by sales people.  We’ve more recently been seeing growth in additional sectors such as travel, healthcare, and education.  Those involve fewer transactions than retail but also rely on building strong customer relationships based on good data.

Data Science Blog: There are several providers of CDPs. Adobe, Tealium, Emarsys or Dynamic Yield, just to name some of them. Do they differ a lot from each other?

Yes they do.  All CDPs build the customer profiles I mentioned.  But some do more things, such as predictive modeling, message selection, and, increasingly, message delivery.  Of course they also vary in the industries they specialize in, regions they support, size of clients they work with, and many technical details.  This makes it hard to buy a CDP but also means buyers are more likely to find a system that fits their needs.

Data Science Blog: How established is the concept of the CDP in Europe in general? And how in comparison with the United States?

CDP is becoming more familiar in Europe but is not as well understood as in the U.S.  The European market spent a lot of money on Data Management Platforms (DMPs) which promised to do much of what a CDP does but were not able to because they do not store the level of detail that a CDP does.  Many DMPs also don’t work with personally identifiable data because the DMPs primarily support Web advertising, where many customers are anonymous.  The failures of DMPs have harmed CDPs because they have made buyers skeptical that any system can meet their needs, having already failed once.  But we are overcoming this as the market becomes better educated and more success stories are available.  What’s the same in Europe and the U.S. is that marketers face the same needs.  This will push European marketers towards CDPs as the best solution in many cases.

Data Science Blog: What are coming trends? What will be the main topic 2020?

We see many CDPs with broader functions for marketing execution: campaign management, personalization, and message delivery in particular.  This is because marketers would like to buy as few systems as possible, so they want broader scope in each system.  We're seeing expansion into new industries such as financial services, travel, telecommunications, healthcare, and education.  Perhaps most interesting will be the entry of Adobe, Salesforce, and Oracle, who have all promised CDP products late this year or early next year.  That will encourage many more people to consider buying CDPs.  We expect that the market will expand quite rapidly, so current CDP vendors will be able to grow even as Adobe, Salesforce, and Oracle make new CDP sales.


You want to get in touch with David M. Raab and understand more about the concept of a CDP? Meet him at Predictive Analytics World on 18 and 19 November 2019 in Berlin, Germany. As a keynote speaker, he will introduce the concept of a Customer Data Platform in the light of predictive analytics. Click here to see the agenda of the event.


The Data Scientist Job and the Future

A dramatic upswing in data science jobs is facilitating the rise of data science professionals to close the supply-demand gap.

By 2024, a shortage of 250,000 data scientists is predicted in the United States alone. Data scientist has emerged as one of the hottest careers in the data world today. With digitization on the rise, IoT and cognitive technologies have generated a large number of data sets, making it difficult for organizations to unlock the value of this data.

With the constant rise of data science, those who fail to upgrade their skill set may be putting themselves at a competitive disadvantage. No doubt data science is still deemed one of the best job titles today, but the battle for expert professionals in this field is fierce.

The hiring market for data science professionals has gone into overdrive, making the competition even tougher. New online institutions have come up with credible certification programs for professionals to get skilled. Not to forget, organizations are on the hunt for candidates with data science and big data analytics skills, as these are the top skills going around in the market today. In addition, it typically takes around 45 days for these job roles to be filled, five days longer than the U.S. market average.

Data science

One might come across several definitions of data science; a simple one states that it is the accumulation of data, arranged and analyzed in a manner that can affect businesses. According to Google, a data scientist is someone who can analyze and interpret complex data, make use of the statistics of a website, and assist in business decision-making. One also needs to be able to choose and build appropriate algorithms and predictive models that help analyze data in a viable manner and uncover positive insights from it.

A data scientist job is now a buzzworthy career in the IT industry. It has driven a wider workforce to get skilled in this role, as most organizations are becoming data-driven. It's pretty obvious that being a data professional will widen job opportunities and offer better chances of a lucrative salary package today. With that in mind, let us look at a few points that suggest the future of data science is bright.

  • Data science is still an evolving technology

A career without upskilling often becomes redundant. To stay relevant in the industry, it is crucial that professionals upgrade themselves in the latest technologies. Data science is evolving to offer an abundance of job opportunities in the coming decade. Since the supply of talent is low, getting skilled in this field is a good call for professionals.

  • Organizations are still facing challenges using the data they generate

The 2018 Data Security Confidence Index from Gemalto estimated that 65% of organizations could not analyze or categorize the data they had stored. However, 89% said that if they could easily analyze their information, they would have a competitive edge. As a data science professional, one can help organizations make progress with the data being gathered and draw positive insights from it.

  • In-demand skill-set

Most data scientists possess the in-demand skill set required by the industry today. To be specific, it is said that since 2013 there has been a 256% increase in data science jobs. Skills such as machine learning, R and Python programming, predictive analytics, AI, and data visualization are the skills employers most commonly seek from today's candidates.

  • A humongous amount of data growing every day

Around 5 billion consumers interact with the internet on a daily basis, and this number is set to increase to 6 billion in 2025, representing three-quarters of the world's population.

In 2018, 33 zettabytes of data were generated, a figure projected to rise to 133 zettabytes by 2025. The production of data will only keep increasing, and data scientists will be the ones helping enterprises handle it effectively.

  • Advancement in career

According to LinkedIn, data scientist was the most promising career of 2019. The top reason this role ranked highest is the salary compensation on offer, in the range of $130,000. The study also predicts that data scientists have high chances of earning a promotion, with a career advancement score of 9 out of 10.

In short, data science remains a sought-after job and will not fade away in the foreseeable future.

Bringing intelligence to where data lives: Python & R embedded in T-SQL

Introduction

Did you know that you can write R and Python code within your T-SQL statements? Machine Learning Services in SQL Server eliminates the need for data movement. Instead of transferring large and sensitive data over the network or losing accuracy with sample csv files, you can have your R/Python code execute within your database. Easily deploy your R/Python code with SQL stored procedures making them accessible in your ETL processes or to any application. Train and store machine learning models in your database bringing intelligence to where your data lives.

You can install and run any of the latest open source R/Python packages to build Deep Learning and AI applications on large amounts of data in SQL Server. We also offer leading edge, high-performance algorithms in Microsoft’s RevoScaleR and RevoScalePy APIs. Using these with the latest innovations in the open source world allows you to bring unparalleled selection, performance, and scale to your applications.

If you are excited to try out SQL Server Machine Learning Services, check out the hands-on tutorial below. If you do not have Machine Learning Services installed in SQL Server, you will first want to follow the getting started tutorial I published here:

How-To Tutorial

In this tutorial, I will cover the basics of how to Execute R and Python in T-SQL statements. If you prefer learning through videos, I also published the tutorial on YouTube.

Basics

Open up SQL Server Management Studio and make a connection to your server. Open a new query and paste this basic example: (While I use Python in these samples, you can do everything with R as well)

EXEC sp_execute_external_script @language = N'Python',
@script = N'print(3+4)'

The sp_execute_external_script procedure is a special system stored procedure that enables R and Python execution in SQL Server. There is a “language” parameter that allows us to choose between Python and R, and a “script” parameter where we can paste R or Python code. If you do not see an output of 7, go back and review the setup steps in this article.

Parameter Introduction

Now that we discussed a basic example, let’s start adding more pieces:

EXEC sp_execute_external_script  @language =N'Python', 
@script = N' 
OutputDataSet = InputDataSet;
',
@input_data_1 =N'SELECT 1 AS Col1';

Machine Learning Services provides more natural communication between SQL and R/Python with an input data parameter that accepts any SQL query. The input parameter is called “input_data_1”.
You can see in the Python code that there are default variables defined to pass data between Python and SQL. The default variable names are “OutputDataSet” and “InputDataSet”. You can change these default names as in this example:

EXEC sp_execute_external_script  @language =N'Python', 
@script = N' 
MyOutput = MyInput;
',
@input_data_1_name = N'MyInput',
@input_data_1 =N'SELECT 1 AS foo',
@output_data_1_name =N'MyOutput';

As you executed these examples, you might have noticed that they each return a result with “(No column name)”. You can specify names for the columns that are returned by adding the WITH RESULT SETS clause to the end of the statement, which takes a comma-separated list of columns and their data types.

EXEC sp_execute_external_script  @language =N'Python', 
@script=N' 
MyOutput = MyInput;
',
@input_data_1_name = N'MyInput',
@input_data_1 =N'
SELECT 1 AS foo,
2 AS bar
',
@output_data_1_name =N'MyOutput'
WITH RESULT SETS ((MyColName int, MyColName2 int));

Input/Output Data Types

Alright, let’s discuss a little more about the input/output data types used between SQL and Python. Your input SQL SELECT statement passes a “Dataframe” to python relying on the Python Pandas package. Your output from Python back to SQL also needs to be in a Pandas Dataframe object. If you need to convert scalar values into a dataframe here is an example:

EXEC sp_execute_external_script  @language =N'Python', 
@script=N' 
import pandas as pd
c = 1/2
d = 1*2
s = pd.Series([c,d])
df = pd.DataFrame(s)
OutputDataSet = df
'

Variables c and d are both scalar values, which you can add to a pandas Series and then convert to a pandas DataFrame. The next example is a little more complicated; read up on the pandas package documentation for more details and examples:

EXEC sp_execute_external_script  @language =N'Python', 
@script=N' 
import pandas as pd
s = {"col1": [1, 2], "col2": [3, 4]}
df = pd.DataFrame(s)
OutputDataSet = df
'

You now know the basics to execute Python in T-SQL!

Did you know you can also write your R and Python code in your favorite IDE like RStudio and Jupyter Notebooks and then remotely send the execution of that code to SQL Server? Check out these documentation links to learn more: https://aka.ms/R-RemoteSQLExecution https://aka.ms/PythonRemoteSQLExecution

Check out the SQL Server Machine Learning Services documentation page for more documentation, samples, and solutions. Check out these E2E tutorials on github as well.

Would love to hear from you! Leave a comment below to ask a question, or start a discussion!

The 6 most in-demand AI jobs and how to get them

A press release issued in December 2017 by Gartner, Inc. explicitly states that 2020 will be a pivotal year in Artificial Intelligence-related employment dynamics. It states that AI will become “a positive job motivator”.

However, the Gartner report also sounds some alarm bells. “The number of jobs affected by AI will vary by industry; through 2019, healthcare, the public sector and education will see continuously growing job demand while manufacturing will be hit the hardest. Starting in 2020, AI-related job creation will cross into positive territory, reaching two million net-new jobs in 2025,” the press release adds.

This phenomenon is expected to strike worldwide, as a report carried by a leading Indian financial daily, The Hindu BusinessLine, states. “The year 2018 will see a sharp increase in demand for professionals with skills in emerging technologies such as Artificial Intelligence (AI) and machine learning, even as people with capabilities in Big Data and Analytics will continue to be the most sought after by companies across sectors, say sources in the recruitment industry,” the news article says.

Before we proceed, let us understand what exactly Artificial Intelligence, or AI, means.

Understanding Artificial Intelligence

Encyclopedia Britannica explains AI as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” Classic examples of AI are computer games that can be played solo against the computer. In these, one player is a human while the other is the reasoning, analytical intelligence of the computer. Chess is one example of such a game. While you play chess against a computer, the AI will analyze your moves, predict and reason about why you made them, and respond accordingly.

Similarly, AI imitates functions of the human brain to a very great extent. Of course, AI can never match the prowess of humans, but it can come fairly close.
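Chess engines are far more elaborate, but the core look-ahead idea can be shown on a toy take-away game. The minimax sketch below is a generic illustration of game-tree search under invented rules, not a description of any real chess program.

# Toy game: players alternately take 1 or 2 sticks; taking the last stick wins.
def minimax(sticks, maximizing):
    """Score from the maximizing player's perspective: +1 win, -1 loss."""
    if sticks == 0:
        return -1 if maximizing else 1   # the player to move has already lost
    scores = [minimax(sticks - t, not maximizing) for t in (1, 2) if t <= sticks]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    # Pick the move whose resulting position scores best for us
    return max((t for t in (1, 2) if t <= sticks),
               key=lambda t: minimax(sticks - t, False))

print(best_move(4))  # taking 1 leaves 3 sticks, a losing position for the opponent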

What does this mean?

It means that AI technology will advance exponentially. The main objective of developing AI is not to reduce dependence on humans in ways that result in job losses or mass retrenchment of employees. Having a large population of unemployed people is harmful to the economy of any country. Moreover, people without money would not be able to use most functions performed through AI, which would render the technology useless.

The advent and growing popularity of AI can be summarized in the words of Bill Gates. According to the founder of Microsoft, AI will have a positive impact on people's lives. In an interview with Fox Business, he said people would have more spare time, which would eventually lead to happier lives. However, he cautioned that it will be a long time before AI starts making any significant impact on our daily activities and jobs.

Career in AI

Since AI primarily aims at making human life better, several companies are testing the technology. Global online retailer Amazon is one of them. Banks and financial institutions, service providers, and several other industries are expected to jump on the AI bandwagon in 2018 and the coming years. Hence, this is the right time to aim for a career in AI. Currently, there is great demand for AI professionals. Here, we look at the top six employment opportunities in Artificial Intelligence.

Computer Vision Research Engineer

A Computer Vision Research Engineer's work includes research and analysis, and developing software, tools, and computer vision technologies. The primary goal of this job is to ensure a customer experience that equals human interaction.

Business Intelligence Engineer

As the job title implies, the role of a Business Intelligence Engineer is to gather data from the multiple functions performed by AI, such as marketing and collecting payments. It also involves studying consumer patterns and bridging the gaps that AI leaves.

Data Scientist

A posting for Data Scientist on the recruitment website Indeed describes the role in these words: “A mixture between a statistician, scientist, machine learning expert and engineer: someone who has the passion for building and improving Internet-scale products informed by data. The ideal candidate understands human behavior and knows what to look for in the data.”

Research and Development Engineer (AI)

Research & Development Engineers are needed to find ways and means to improve functions performed through Artificial Intelligence. They review voice and text chat conversations conducted by bots or robotic intelligence with real-life persons to ensure there are no glitches. They also develop better solutions to close the gap between human and AI interactions.

Machine Learning Specialist

The job of a Machine Learning Specialist is rather complex. They are required to study patterns such as the large-scale use of data, uploads, and common words used in any language, work out how these can be incorporated into AI functions, and analyze and improve existing techniques.

Researchers

Researchers in AI are perhaps the best-paid lot. They are required to research various aspects of AI in an organization. Their role involves researching usage patterns, AI responses, data analysis, data mining, linguistic differences based on demographics, and almost every human function that AI is expected to perform.

As with any other field, there are several other designations available in AI. However, these will depend on your geographic location. The best way to find the demand for any AI job is to look at good recruitment or job posting sites, especially those specific to your region.

In conclusion

Since AI is a technology that is still gathering momentum, it will be some years before there is a flood of people who can be hired as freshers or experts in this field. Consequently, the demand for AI professionals is rather high. Median salaries for the jobs mentioned above range between US$100,000 and US$150,000 per year.

However, before leaping into AI, it is advisable to find out what other qualifications employers require. As with any job, some companies want AI experts who hold specific engineering degrees combined with additional qualifications in IT and a certificate confirming the required AI training. Despite this, now is the best time to make a career in the AI sector.