Written Articles about Big Data Analytics

Five Illusions about Big Data you can’t help but believe in

Big Data is a smorgasbord of data. Even the marketing world has acknowledged the gravity of Big Data. But alas! Despite having such resplendent data power at our side, we are no closer to making smart marketing decisions than we were before the concept became well known.

So, something is definitely not right, right? Not all information derived from this industry is precise, and to address this issue I have highlighted five common misconceptions about Big Data. Know them, work on them and gain from them.

 

Misconception 1: Human touch surpasses automation

Entrepreneurs are the ones who pull their weight. The human effort they put in yields potential success for the firm, but only if it is backed by meaningful data.
“One of the most common misconceptions is that people believe they will always outperform computers in their decision-making process. That may have been the case in the past, but with the complexity of today’s markets and the advancement of technology, this assumption no longer holds true,” says Victor Rosenman, CEO of Feedvisor, the pioneer of Algo-Commerce. He added, “All business owners are constantly required to make critical decisions, and the most effective decisions are not based on gut feelings, but on facts and data.”

Misconception 2: Data leads to more costs

Money makes a business, and it is also the other way round. Small business owners benefit the most from using artificial intelligence: AI saves both time and money and thus helps raise revenues. You need to understand that big data would not be enjoying its current hot-seat status if it were that expensive to implement. The technologies are low-cost now and getting cheaper. Moreover, besides being inexpensive, big data also helps curb other costs that the company would otherwise have to bear.

Misconception 3: Data takes the lead in big changes

“The view of cognitive systems as brains that automatically solve any problem is a popular misconception,” IBM’s Brandon Buckner recently said. Integrated tools are mostly implemented to do things like gauge human expertise and enhance human intelligence. By this, he meant that technologies actually support your business instead of taking the lead. With data, business owners enjoy better decision-making capabilities, which is propitious for future business endeavours.

Misconception 4: Little data is too little to make any impact

Though big data arrests the glowing eyes, little data seizes the mind. Little data is simply a small set of data. People always look for a bulk of information, but at times quality is not what they seek. Sometimes little data can do the job that bulk data fails to do: the information in little data is more restrained, clean and to the point.

Misconception 5: Big data for big businesses

You no longer need to shell out ludicrous amounts of money to acquire big data technologies. Non-Fortune-500 companies are also introducing big data into their systems. And the best part is that it is no longer confined to a single sector; it is present in almost every industry.

A 2011 McKinsey Global Institute report called “Big data: The next frontier for innovation, competition, and productivity” stated: “The use of big data will become a key basis of competition and growth for individual firms.” Now it is 2017, so just think how much Big Data must have grown in size and scope over the past six years.

Clarify Goal of the Analysis – Process Mining Rule 1 of 4

This is article no. 1 of the four-part article series Privacy, Security and Ethics in Process Mining.

Read this article in German:
Datenschutz, Sicherheit und Ethik beim Process Mining – Regel 1 von 4

Clarify Goal of the Analysis

The good news is that in most situations Process Mining does not need to evaluate personal information, because it usually focuses on the internal organizational processes rather than, for example, on customer profiles. Furthermore, you are investigating the overall process patterns. For example, a process miner is typically looking for ways to organize the process in a smarter way to avoid unnecessary idle times rather than trying to make people work faster.

However, as soon as you would like to better understand the performance of a particular process, you often need to know more about other case attributes that could explain variations in process behaviours or performance. And people might become worried about where this will lead them.

Therefore, already at the very beginning of the process mining project, you should think about the goal of the analysis. Be clear about how the results will be used. Think about what problem you are trying to solve and what data you need to solve this problem.

Do:

  • Check whether there are legal restrictions regarding the data. For example, in Germany employee-related data cannot be used and typically simply would not be extracted in the first place. If your project relates to analyzing customer data, make sure you understand the restrictions and consider anonymization options (see guideline No. 3).
  • Consider establishing an ethical charter that states the goal of the project, including what will and what will not be done based on the analysis. For example, you can clearly state that the goal is not to evaluate the performance of the employees. Communicate to the people who are responsible for extracting the data what these goals are and ask for their assistance to prepare the data accordingly.

Don’t:

  • Start out with a fuzzy idea and simply extract all the data you can get. Instead, think about what problem you are trying to solve and what data you actually need to solve it. Your project should focus on business goals that can get the support of the process managers you work with (see guideline No. 4).
  • Make your first project too big. Instead, focus on one process with a clear goal. If you make the scope of your project too big, people might block it or work against you before they even understand what process mining can do.

Privacy, Security and Ethics in Process Mining – Article Series

When I moved to the Netherlands 12 years ago and started grocery shopping at one of the local supermarket chains, Albert Heijn, I initially resisted getting their Bonus card (a loyalty card for discounts), because I did not want the company to track my purchases. I felt that using this information would help them to manipulate me by arranging or advertising products in a way that would make me buy more than I wanted to. It simply felt wrong.

Read this article in German:
Datenschutz, Sicherheit und Ethik beim Process Mining – Artikelserie

The truth is that no data analysis technique is intrinsically good or bad. It is always in the hands of the people using the technology to make it productive and constructive. For example, while supermarkets could use the information tracked through the loyalty cards of their customers to make sure that we have to take the longest route through the store to get our typical items (passing by as many other products as possible), they can also use this information to make the shopping experience more pleasant, and to offer more products that we like.

Most companies have started to use data analysis techniques to analyze their data in one way or the other. These data analyses can bring enormous opportunities for the companies and for their customers, but with the increased use of data science the question of ethics and responsible use also grows more dominant. Initiatives like the Responsible Data Science seminar series [1] take on this topic by raising awareness and encouraging researchers to develop algorithms that have concepts like fairness, accuracy, confidentiality, and transparency built in (see Wil van der Aalst’s presentation on Responsible Data Science at Process Mining Camp 2016).

Process Mining can provide you with amazing insights about your processes, and fuel your improvement initiatives with inspiration and enthusiasm, if you approach it in the right way. But how can you ensure that you use process mining responsibly? What should you pay attention to when you introduce process mining in your own organization?

In this article series, we provide you with four guidelines that you can follow to prepare your process mining analysis in a responsible way:

Part 1 of 4: Clarify the Goal of the Analysis

Part 2 of 4: Responsible Handling of Data

Part 3 of 4: Consider Anonymization

Part 4 of 4: Establish a collaborative Culture

Acknowledgements

We would like to thank Frank van Geffen and Léonard Studer, who initiated the first discussions in the workgroup around responsible process mining in 2015. Furthermore, we would like to thank Moe Wynn, Felix Mannhardt and Wil van der Aalst for their feedback on earlier versions of this article.

Statistical Relational Learning – Part 2

In the first part of this series, “An Introduction to Statistical Relational Learning”, I touched upon the basic Machine Learning paradigms, gave some background and intuition for the concepts, and concluded with what the MLN template looks like. In this blog, we will dive in to get an in-depth understanding of the MLN template, again with the help of sample examples. I will then conclude by highlighting the various toolkits available and some of their differentiating features.

MLN Template – explained

A Markov logic network can be thought of as a set of first-order logic formulas, each tied to a weight. But what exactly does this weight signify?

Weight Learning

According to the definition, it is the log odds between a world where F is true and a world where F is false, and it captures the marginal distribution of the corresponding predicate.
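In symbols, this standard definition from the MLN literature reads as follows (a reconstruction, with all other groundings held fixed):

$$ w \;=\; \log \frac{P(\text{world where } F \text{ holds})}{P(\text{world where } F \text{ does not hold})} $$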

Each formula can be associated with some weight value, that is, a positive or negative real number. The higher the weight, the stronger the constraint represented by the formula. In contrast to classical logic, all worlds (i.e., Herbrand interpretations) are possible with a certain probability [1]. The main idea behind this is that the probability of a world increases as the number of formulas it violates decreases.

With their probabilistic approach combined with logic, Markov logic networks posit that a world becomes less likely the more formulas it violates, unlike pure logic, where a world is false if it violates even a single formula. When a formula with a high weight (i.e. greater significance) is violated, the world violating it becomes correspondingly less likely.

Another important concept during the first phase of weight learning, when applying an MLN template, is “grounding”. Grounding means replacing each variable (or function) in a predicate with constants from the domain.
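As a small illustration (a hypothetical domain, not taken from the example below): grounding the predicate Friends(x, y) over the constants {Anna, Bob} produces the ground atoms

Friends(Anna, Anna), Friends(Anna, Bob), Friends(Bob, Anna), Friends(Bob, Bob)

and, likewise, grounding a formula yields one ground formula per substitution of constants for its variables.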

Weight Learning – An Example

Note: All examples are highlighted in the Alchemy MLN format

Let us consider an example where we want to identify the relationship between two different types of verb-noun pairs, i.e. noun subject (nsubj) and direct object (dobj).

The input predicateFormula.mln file contains

  1. The predicates nsubj(verb, subject) and dobj(verb, object) and
  2. The formulas nsubj(+ver, +s) and dobj(+ver, +o)

These predicates and rules are used to learn all possible SVO combinations, i.e. the probability of each Subject-Verb-Object combination. The + sign ensures a cross product between the domains, so that weights are learned for all combinations. The training database consists of the nsubj and dobj tuples, i.e. the relations that form the evidence used to learn the weights. A minimal sketch of such an input file and of the weight-learning call is given below.
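The following is an illustrative sketch in the Alchemy format; the exact file from the original example is not reproduced here, so the type names and command-line flags are assumptions based on the Alchemy documentation.

// predicateFormula.mln (sketch)
// Predicate declarations
nsubj(verb, subject)
dobj(verb, object)

// Formulas: the + operator makes Alchemy learn a separate weight
// for every combination of constants in the marked arguments
nsubj(+ver, +s)
dobj(+ver, +o)

The weights can then be learned with Alchemy's learnwts tool, roughly along these lines:

learnwts -g -i predicateFormula.mln -o learned.mln -t train.db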

When we run this weight-learning step for the above set of rules against the training evidence, we obtain the learned weights.

Note that the formulas are now grounded by all occurrences of nsubj and dobj tuples from the training database (the evidence), and a weight is attached at the start of each such grounded combination.

But it should be noted that there is no network yet; this is just a set of weighted first-order logic formulas. The MLN template we created so far will generate Markov networks from all of our ground formulas. Internally, it is represented as a factor graph, where each ground formula is a factor and all the ground predicates found in the ground formula are linked to that factor.

Inference

The definition goes as follows:

Estimate the probability distribution encoded by a graphical model, given some data (or observations).

Out of the many inference algorithms, the two major ones are MAP and marginal inference. In MAP inference we find the most likely state of the world given the evidence, where y is the query and x is the evidence: $\arg\max_y P(y \mid x)$. For an MLN, this is in turn equivalent to maximizing the weighted sum of satisfied ground formulas, $\arg\max_y \sum_i w_i \, n_i(x, y)$.

The other is marginal inference, which computes the conditional probability of query predicates given some evidence. Some advanced inference algorithms are Loopy Belief Propagation, WalkSAT, MC-SAT, etc.

The probability of a world is given by the weighted sum of the true groundings of each formula i, put under an exponential function and divided by the partition function Z, which equals the sum of this value over all possible assignments. The partition function acts as a normalization constant that keeps the probability values between 0 and 1.
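Written out, with n_i(x) denoting the number of true groundings of formula i in world x, this is the standard MLN formulation:

$$ P(X = x) \;=\; \frac{1}{Z}\, \exp\!\Big(\sum_i w_i\, n_i(x)\Big), \qquad Z \;=\; \sum_{x'} \exp\!\Big(\sum_i w_i\, n_i(x')\Big) $$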

Inference – An Example

Let us draw inference on the same example as earlier.

After learning the weights, we run inference (with or without partial evidence) and query the relations of interest (nsubj here) to get the inferred values.
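With Alchemy, this step might look roughly as follows; the file names and flags are illustrative, so check the Alchemy manual for the exact options:

infer -ms -i learned.mln -e evidence.db -r nsubj.results -q nsubj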

Tool-kits

Let’s look at some of the MLN tool-kits at our disposal for learning and large-scale inference. I have tried to make an assorted list of the tools here and to highlight some of their main features and problems.

For example, BLOG (Bayesian Logic) uses the Swift compiler but is not relational! ProbLog has a Python wrapper and is based on Horn clauses but has no learning feature. These tools were invented in the early days, well before MLNs took their present form.

ProbCog, developed at the Technical University of Munich (TUM) and the AI lab at Bremen, covers not just MLNs but also Bayesian Logic Networks (BLNs), Bayesian networks and Prolog. In fact, it is now GUI-based. Thebeast provides a shell to analyze and inspect model feature weights and missing features.

Alchemy from the University of Washington (UoW) was the first first-order (FO) probabilistic logic toolkit. RockIt from the University of Mannheim has an online, REST-based interface and uses only Conjunctive Normal Form (CNF), i.e. an And-Or format, in its formulas.

Tuffy scales this up by using a Relational Database Management System (RDBMS), whereas Felix allows large-scale inference! Elementary makes use of secondary storage, and DeepDive is the current state of the art. All of these tools are part of the HAZY project group at Stanford University.

Lastly, LoMRF, i.e. Logical Markov Random Fields (MRF), is Scala-based and has a feature for analysing different hypotheses by comparing the differences between .mln files!

 

Hope you enjoyed the read. The content starts from basic concepts and ends by highlighting the key tools. In the final part of this three-part blog series I will explain an application scenario and highlight the active research and industry players. Any feedback as a comment below or through a message is more than welcome!

Back to Part I – Statistical Relational Learning

Additional Links:

[1] Knowledge base files in Logical Markov Random Fields (LoMRF)

[2] (still) nothing clever Posts categorized “Machine Learning” – Markov Logic Networks

[3] A gentle introduction to statistical relational learning: maths, code, and examples

A review of Language Understanding tools – IBM Conversation

In the first part of this series, we saw how top firms, with their different assistants, are vying to acquire a space in the dialogue market. In this second and final part of this blog series on Conversational AI, I get more technical and discuss the fundamentals of the underlying concept behind building a dialogue system, i.e. the cornerstone of any Language Understanding tool. I explain this by reviewing one such Language Understanding tool as an example, available in the IBM Bluemix suite and called IBM Conversation.

IBM Conversation within Bluemix

IBM Conversation was built along the lines of IBM Watson in the IBM Bluemix suite. It is now the service of choice for dialogue construction after IBM Dialog was deprecated. We start off by searching for it and then creating a dedicated environment in the console.


Setting up IBM Conversation from the Bluemix Catalog/Console

Basics

The Conversation component in IBM Bluemix is based on the intent, entity and dialogue architecture, and the same is the case with Microsoft LUIS (Language Understanding Intelligent Service). One of the key components is what is termed Natural Language Understanding, or NLU for short. It extracts words from a textual sentence and analyses the grammatical dependencies to construct high-level semantic information that identifies the underlying intent and entity in the given utterance. It returns a confidence measure for the top-most extracted intent out of the many pre-specified intents, i.e. the intent that is most likely for the given utterance according to our trained model.

These are all statistically/machine learned based on the training data. Go over the demo, tutorial and documentation to get a more in-depth view of things at IBM Conversation.

The intent, entity and dialogue based architecture forms the crux of any SLU (Spoken Language Understanding) system for extracting semantic information from speech, and it makes such a system generic across the various Language Understanding toolkits.


The Alexa Interaction model based on intent and slots in ASK

Another huge advantage that ASK provides for building such an architecture is that it has multi-lingual support.

Conceptual Mapping

Intents can be thought of as classes into which the input examples are classified. For example,

Call Mark is mapped to the MOBILE class and Navigate to Munich is mapped to the ROUTE class

The entities are labels, so e.g. from above, you can have

Mark as a PERSON and Munich as a CITY.
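In a tool like IBM Conversation, the classification result for an utterance comes back as JSON roughly along these lines (a simplified, hypothetical response; the exact field names vary between toolkits):

{
  "input": { "text": "Navigate to Munich" },
  "intents": [ { "intent": "ROUTE", "confidence": 0.93 } ],
  "entities": [ { "entity": "CITY", "value": "Munich" } ]
}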

Major advantage and drawback

Both Conversation and LUIS use a non-Machine-Learning-based approach that lets software developers or business users create a fast prototype. It is definitely easy to begin with and gives a lot of options to create a drag-and-drop based dialogue system. However, it can’t scale up to large data. A hybrid approach that combines with, or builds a dynamic system on top of, this static approach is needed for scalable industry solutions.

Extensions

Moreover, an end-to-end workflow can be built by plugging in components from Node-RED, and an introduction to the same can be viewed in the video below.

What’s good is that they have a component for Conversation as well. So we can build a complete chatbot, starting with a speech-to-text component to get the human commands translated to text, followed by a Conversation component to build up the dialogue, and lastly a text-to-speech component to translate the textual dialogue back into speech to be spoken by a humanoid robot or a mobile device!

Missing components and possible future work

It is not possible to add entities/intents dynamically once the initial workspace is constructed. The advanced response tab doesn’t allow one to edit (add) entities in the response field, for example to add variables to the context. We can edit the response JSON (shown below), but the change doesn’t save or get reflected.

{
  "output": {
    "text": "I understand you want me to turn on something. You can say turn on the wipers or switch on the lights."
  },
  "context": {
    "toppings": "<? context.toppings.append( 'onions' ) ?>"
  },
  "entities": {
    "appliance": "<? entities.appliance.append( 'mobile' ) ?>"
  }
}

Moreover, the documentation only mentions accessing intents and entities, not modifying them.


The only place to add intents and entities is back in the workspace, not programmatically at run time. Perhaps a possible solution is to use the UI together with DB data: save the intermediate, newly discovered intent/entity values in a database and then update the workspace later.

As I end this blog, perhaps another AI assistant will be released that has moved beyond its embryonic stage to conquer real-life application scenarios. Conversational AI is hot property, so dive in to reap its benefits, both from an end user’s and a developer’s perspective!

Note: Hope you enjoyed the read. I have deliberately kept the content a mix of non-technical and technical material to build on the excitement and buzz around this exciting field of conversational AI! Publishing this blog was on my list as I had been compiling facts for the last few weeks, but I had to hurry even more, given the recent news surrounding this upsurge. As always, any feedback as a comment below or through a message is more than welcome!

A “Dialogue” on the recent advances in Conversational Artificial Intelligence (AI)

How important is it to interact, converse and emote in a world that is becoming closed and parochial? Conversational Artificial Intelligence (AI) offers a way to build agents that have the capability to learn and respond like humans, and thereby helps bring the long-term goal of General AI to fruition.

Conversation with artificial assistants, be it Microsoft’s Cortana, Apple’s Siri, Google Now or Amazon’s Alexa, has been gaining prominence in the last few years. So sit back, relax and enjoy the simple conversational interface on offer, as I take you through a short tour!

In this two-part blog series, I cover the latest developments in the field of dialogue and conversational Artificial Intelligence (AI). I give a brief overview of the current developments in this field and the many Language Understanding tools in the market, and in particular review one of them – IBM Conversation.

It’s a rat race – So act and don’t over think!

After the horrors of Tay tweets – Microsoft’s conversational AI tweet bot that was eventually rolled back due to its racist and sexist tweets earlier this year – AI enthusiasts have had some good news over the last few months.


Microsoft hurried the launch of Tay tweets, its conversational AI bot, only to withdraw it completely.

The Amazon Echo, Google’s Home and the smart home hub Apple has been preparing are good examples of how big companies are fighting tooth and nail to secure a place in your smart space. Here’s what François Chollet, researcher at Google and author of the popular framework Keras, has to say:

Whatever idea you started working on last week, a few other teams have probably been working on it for a month and are about to publish.
— François Chollet (@fchollet) October 5, 2016

Alexa Prize Competition

Just four weeks back, Amazon announced the Alexa Prize, an annual competition for university students dedicated to accelerating the field of conversational AI. This inaugural competition focuses on creating a social bot, using the Alexa Skills Kit (ASK), that can converse coherently and engagingly with humans on popular topics and news events. This gives student developer teams the chance to explore a plethora of advanced topics in the realm of AI, including knowledge acquisition, natural language understanding, natural language generation, context modeling, commonsense reasoning and dialogue planning. With a huge cash prize at stake, goodies on offer and support from the ASK team, building a socially coherent bot would be a worthwhile experience! The last date for team submissions is October 28, 2016, and more details about the application process can be found here.

Say Allo!

Google Allo is a smart messaging app with personalized recommendations from the Google Assistant that lets you express yourself better with stickers, doodles, and HUGE emojis & text. Allo also allows you to get help from your Google Assistant without leaving the conversation. A one-to-one conversation can be initiated with your Assistant by addressing it with the @google tag, and it gets better as you use it more. More functional details are in the blog post Say hello to Google Allo: a smarter messaging app.

IBM Pepper developer Conference

IBM BusinessConnect 2016, held on 4th October 2016 in Stockholm, Sweden, showcased some IBM Watson powered tools and applications in the humanoid robot Pepper.

Yesterdays #IBMBCSE at Stockholm Waterfront was fantastic thanks to all IBMers, partners and customers, and thanks to #Pepper of course! pic.twitter.com/quZuaptu8Z
— IBM ClientCtr Nordic (@IBMCCNordic) October 5, 2016

Pepper is a SoftBank robot that uses IBM Watson technology at its core.

Banzai! (Live long) – Watch this first home robot commercial as the unforeseen future is coming!

The Watson Developer Conference, to be held in San Francisco from 9th to 10th November this year, is packed with technical talks, hands-on labs, and coding challenges to get you working with the tools that will make you a sought-after developer.


The IBM Global Industry Solution is located in Nice, France.

Joie de vivre – Samsung buys Viv

And after Google’s Allo and IBM’s Pepper, it was Samsung’s turn to jump onto the dialogue-based conversational AI bandwagon as it acquired Viv, built by the creators of Apple’s Siri. Viv is a more powerful successor to Siri that aims for ubiquity. With self-generating software that is capable of writing its own code to accomplish new tasks (dynamic program generation), Viv handles new user tasks and builds plans on the fly!

In its demo video “Beyond Siri: The World Premiere of Viv with Dag Kittlaus” (as in the embedded link/video below) earlier this year, it was already suggested that Viv would eventually be partnered with or sold to a mobile device maker.

With everyone wanting to invest heavily, the question was who and when! Hence, this announcement from Samsung doesn’t come as a big surprise.

Viv will ultimately provide services to Samsung and its platforms but remain an independent entity. Samsung hopes to disrupt the mobile market share with this acquisition and can extend it to other home devices; after all, it purchased SmartThings for around $200M back in 2014. More details on the acquisition here: Samsung acquires Viv, a next-gen AI assistant built by the creators of Apple’s Siri.

Don’t take it slow because there is Ozlo!

Ozlo, launched a few days back on iOS and the web, is another of the many sprouting AI assistants; it keeps a good memory of one’s previous interactions. At least by its name, Ozlo attempts to be different from its competitors in the market at present, which use repetitive female names. The best thing is that it is integrated with a plethora of services like Yelp, TripAdvisor and IMDB, among many others, and uses Further Food, Authority Nutrition, Cookies, etc. to provide nutritional guidance. This is a huge boost compared with its rival companies, which tend to prioritize their own services rather than integrating with existing ones. An in-depth review can be found here: Ozlo AI assistant is the new underdog filling the void left by Viv.

There were also rumors that Apple is going to buy McLaren, which set eyeballs rolling: a big tech giant would be entering the completely new domain of the automobile industry, and this could lead others like Google, Microsoft and IBM to follow suit and invest heavily!

Conference workshops also wanting a dialogue!

There are in total 50 workshops at NIPS 2016 this year covering a range of different Machine Learning topics.

  1. The Dialog workshop, scheduled for the 10th of December, focuses on building agents capable of mutually coordinating with humans via communication. Given the tremendous economic potential of the ability to converse, it relates intimately to the overall goal of AI.
    For the call for papers, the deadline has been extended to midnight of October 23, 2016, and more details about the workshop schedule can be found at the chair website LET’S DISCUSS: LEARNING METHODS FOR DIALOGUE NIPS 2016 WORKSHOP. The papers cover the three high-level areas below:

    • Being data-driven, especially regarding offline/online evaluation
    • Building complete applications or end-to-end systems
    • Model innovation to incorporate linguistic knowledge into the architecture
  2. Another workshop, on Interactive Machine Learning (IML), is to be held on the 9th of December. It focuses on adaptable collaboration: how autonomous agents can solve a task by making use of interactions with humans. Designing and engineering fully autonomous agents is difficult, and there is a compelling need for IML algorithms that enable artificial and human agents to collaborate and solve independent or shared goals.
    The call for papers invites new ideas in interactive learning, reports on research in progress as well as discussions of open problems and challenges facing interactive machine learning, with particular interest in research on the practical application of interactive learning systems (for robotics, virtual agents, dialogue systems, among others) and the ability of these systems to handle the complexity of real-world problems. More details about the application process, requirements, application deadline, etc. are at the workshop portal Future of Interactive Learning Machines Workshop (FILM at NIPS 2016).

In the next part of this series on Conversational AI, I will cover the basics behind the Language Understanding tools in the market that enable us to build a dialogue system.

Read the second Part here: A review of Language Understanding tools – IBM Conversation

Statistical Relational Learning

An Introduction to Statistical Relational Learning – Part 1

Statistical Relational Learning (SRL) is an emerging field, and one that is taking centre stage in the Data Science age. Big Data has been one of the primary reasons for the continued prominence of this relational learning approach, given the voluminous amount of data now available from which to learn interesting and unknown patterns. Moreover, the tools have also improved their processing prowess, especially in terms of scalability.

This introductory blog is a prelude to SRL, and later on I will also touch on more advanced topics, specifically Markov Logic Networks (MLN). To start off, let’s look at how SRL fits into one of the five different Machine Learning paradigms.

Five Machine Learning Paradigms

Let’s look at the five Machine Learning paradigms, each of which is inspired by ideas from a different field!

  1. Connectionists, as they are called, led by Geoffrey Hinton (University of Toronto & Google, and one of the major names in the Deep Learning community), think that a learning algorithm should mimic the brain! After all, it is the brain that does all the complex actions for us; this idea stems from Neuroscience.
  2. Another group, the Evolutionists, whose leader was the late John Holland (from the University of Michigan), believed it is not the brain but evolution that came first, and hence that evolution is the master algorithm for building anything. Using this approach of having the fittest program the future, they are currently building 3D prints of future robots.
  3. Another school of thought stems from Philosophy, where Analogists like Douglas R. Hofstadter, an American writer and author of the popular, award-winning book Gödel, Escher, Bach: an Eternal Golden Braid, believe that analogy is the core of cognition.
  4. Symbolists like Stephen Muggleton (Imperial College London) think Psychology is the base, and by developing rules for deductive reasoning they built Adam, a robot scientist at the University of Manchester!
  5. Lastly, we have a school of thought whose foundations rest on Statistics & Logic, which is the focal point of interest in this blog. This emerging field started to gain prominence with the invention of Bayesian networks by Judea Pearl (University of California Los Angeles – UCLA), who was awarded the Turing Award (the highest award in Computer Science) in 2011. Bayesians, as they are called, are the most fanatical of the lot, as they think everything can be represented by Bayes’ theorem, using hypotheses that can be updated based on new evidence.

SRL fits into the last paradigm of Statistics and Logic. As such, it offers an alternative to the now-booming Deep Learning approach inspired by Neuroscience.

Background

In many real-world scenarios and use cases, the underlying data is often assumed to be independent and identically distributed (i.i.d.). However, real-world data is not; instead it contains many relationships. SRL attempts to represent, model, and learn in this relational domain!

There are four main models in SRL:

  1. Probabilistic Relational Models (PRM)
  2. Markov Logic Networks (MLN)
  3. Relational Dependency Networks (RDN)
  4. Bayesian Logic Programs (BLP)

It is difficult to cover all the major models, and hence the focus of this blog is only on the emerging field of Markov Logic Networks. MLN is a powerful framework that combines statistics (it uses Markov Random Fields) with logical reasoning (first-order logic).

 


Academia

Some of the prominent names in academia and the research community working on MLN include:

  1. Professor Pedro Domingos from the University of Washington is credited with introducing MLN in his paper from 2006. His group created the tool Alchemy, which was one of the first first-order logic toolkits.
  2. Another famous name is Professor Luc De Raedt from the AI group at the University of Leuven in Belgium, whose team created the tool ProbLog, which also has a Python wrapper.
  3. The HAZY Project (Stanford University), led by Prof. Christopher Ré from the InfoLab, is doing active research in this field, and Tuffy, Felix, Elementary and DeepDive are some of the tools developed by them. More on these later!
  4. Talking about academia close by, i.e. in Germany, Prof. Michael Beetz and his entire team moved from TUM to the University of Bremen. Their group invented the tool ProbCog.
  5. At present, Prof. Volker Tresp from Ludwig Maximilian University (LMU) Munich and Dr. Matthias Nickles at the Technical University of Munich (TUM) have research interests in SRL.

Theory & Formulation

A look at some background and theoretical concepts to understand MLN better.

A. Basics – Probabilistic Graphical Models (PGM)

The definition of a PGM goes as such:

A PGM encodes a joint p(x,y) or conditional p(y|x) probability distribution such that given some observations we are provided with a full probability distribution over all feasible solutions.

A PGM helps to encode relationships between a set of random variables, and it achieves this by making use of a graph! These graphs can either be directed or undirected.

B. Markov Blanket

A Markov blanket is defined on a directed acyclic graph, i.e. a Bayesian network. The Markov blanket of a central node A consists of its parents, its children and the other parents of its children (moralization), and these nodes are the only knowledge needed to predict node A.

C. Markov Random Fields (MRF)

An MRF is an undirected graphical model. Every node in an MRF satisfies the local Markov property of conditional independence, i.e. a node is conditionally independent of every non-neighbouring node, given its neighbours. Relating this to the Markov blanket explained previously: in an MRF, the Markov blanket of a node is simply its adjacent nodes!

Intuition

We know that probability handles uncertainty, whereas logic handles complexity. So why not make use of both of them to model relationships in data that is both uncertain and complex? Markov Logic Networks (MLNs) do precisely that for us!

An MLN is composed of a set of pairs <w, F>, where F is a formula (written in FO logic) and w is a weight (a real number identifying the strength of the constraint).

An MLN basically provides a template for grounding a Markov network. Grounding is explained in more detail in the “Weight Learning” section of the next part of this series.

An MLN can be seen as a log-linear model where the probability of a world is given by the weighted sum of the true groundings of each formula i under an exponential function, divided by Z, the partition function, which normalizes the result and keeps the probability values between 0 and 1.

$$ P(X = x) \;=\; \frac{1}{Z}\, \exp\!\Big(\sum_i w_i\, n_i(x)\Big) $$

where n_i(x) is the number of true groundings of formula i in world x.

The MLN Template

Rules or Predicates

The relations to be learned are expressed in FO logic. Some of the possible FO logical connectives and quantifiers are And (^), Or (V), Implication (→), and many more. Formulas may contain one or more predicates, connected to each other with logical connectives and quantifier symbols. Two example formulas in this style are shown below.
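For illustration, here are two formulas from the classic “friends and smokers” example used throughout the MLN literature (purely illustrative, not tied to any particular dataset):

Smokes(x) => Cancer(x)
Friends(x, y) ^ Smokes(x) => Smokes(y)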

Evidence

Evidence represents known facts, i.e. the ground predicates. Each fact is expressed with predicates that contain only constants from their corresponding domains.

Weight Learning

Discover the importance of relations based on grounded evidence.

Inference

Query relations, given partial evidence to infer a probabilistic estimate of the world.

More on Weight Learning and Inference in the next part of this series!

Hope you enjoyed the read. I have deliberately kept the content basic and a mix of non-technical and technical material, so as to first highlight the key players and some background concepts and generate the reader’s interest in this topic; the technicalities can easily be read in the paper. Any feedback as a comment below or through a message is more than welcome!

Continue reading with Statistical Relational Learning – Part II.


Data Science on a large scale – can it be done?

Analytics drives business

In today’s digital world, data has become the crucial success factor for businesses as they seek to maintain a competitive advantage, and there are numerous examples of how companies have found smart ways of monetizing data and deriving value accordingly.

On the one hand, many companies use data analytics to streamline production lines, optimize marketing channels, minimize logistics costs and improve customer retention rates.  These use cases are often described under the umbrella term of operational BI, where decisions are based on data to improve a company’s internal operations, whether that be a company in the manufacturing industry or an e-commerce platform.

On the other hand, over the last few years, a whole range of new service-oriented companies have popped up whose revenue models wholly depend on data analytics.  These Data-Driven Businesses have contributed largely to the ongoing development of new technologies that make it possible to process and analyze large amounts of data to find the right insights.  The better these technologies are leveraged, the better their value-add and the better for their business success.  Indeed, without data and data analytics, they don’t have a business.

Data Science – hype or has it always been around?

In my opinion, there is too much buzz around the new era of data scientists. Ten years ago, people simply called it data mining, describing similar skills and methods. What has actually changed is not the statistical methodology but the fact that businesses are now confronted with new types of data sources such as mobile devices and data-driven applications. I described that idea in detail in my recent post Let’s replace the Vs of Big Data with a single D.

But, of course, you cannot deny that the importance of these data crunchers has increased significantly. The art of mining data mountains (or perhaps I should say “diving through data lakes”) to find appropriate insights and models and then find the right answers to urgent, business-critical questions has become very popular these days.

The challenge: Data Science with large volumes?

Michael Stonebraker, winner of the 2014 Turing Award, has been quoted as saying: “The change will come when business analysts who work with SQL on large amounts of data give way to data scientists, which will involve more sophisticated analysis, predictive modeling, regressions and Bayesian classification. That stuff at scale doesn’t work well on anyone’s engine right now. If you want to do complex analytics on big data, you have a big problem right now.”

And if you look at the limitations of existing statistical environments out there using R, Python, Java, Julia and other languages, I think he is absolutely right. Once data scientists have to handle larger volumes, the tools are just not powerful and scalable enough. This results in data sampling or aggregation just to make the statistical algorithms applicable at all.

A new architecture for “Big Data Science”

We at EXASOL have worked hard to develop a smart solution to respond to this challenge. Imagine that it is possible to use raw data and intelligent statistical models on very large data sets, directly at the place where the data is stored; where the data is processed in-memory to achieve optimal performance, distributed across a powerful MPP cluster of servers, in an environment where you can now “install” the programming language of your choice.

Sounds far-fetched? If you are not convinced, then I highly recommend you have a look at our brand-new in-database analytic programming platform, which is deeply integrated in our parallel in-memory engine and extensible using nearly any programming language and statistical library.

For further information on our approach to big data science, go ahead and download a copy of our technical whitepaper:  Big Data Science – The future of analytics.

Neural Nets: Time Series Prediction

Artificial neural networks are very strong universal approximators. Google recently defeated the world’s strongest Go player (Go being an ancient Chinese board game) with two neural nets, which captured the game board as a picture. Aside from such classification tasks, neural nets can be used to predict future values, behaviors or patterns solely based on learned history. In the machine learning literature, this is often referred to as time series prediction, because, you know, values over time need to be predicted. Hah! To illustrate the concept, we will train a neural net to learn the shape of a sinusoidal wave, so it can continue to draw the shape without any help. We will do this with Scala. Scala is a great language, because it is strongly typed but feels easy like Python. Throughout this article, I will use the library NeuroFlow, which is a simple, lightweight library I wrote to build and train nets. Because Open Source is the way to go, feel free to check (and contribute to? :-)) the code on GitHub.

Introduction of the shape

If we, as humans, want to predict the future based on historic observations, we would have no other chance but to be guided by the shape drawn so far. Let’s study the plot below, asking ourselves: How would a human continue the plot?

f(x) = sin(10*x)

Intuitively, we would keep on oscillating up and down, just like the grey dotted line tries to rough out. To us, the continuation of the shape is reasonably easy to understand, but a machine does not have a gut feeling to ask for a good guess. However, we can summon a Frankenstein, which will be able to learn and continue the shape based on numbers. In order to do so, let’s have a look at the raw, discrete data of our sinusoidal wave:

x f(x)
0.0 0.0
0.05 0.479425538604203
0.10 0.8414709848078965
0.15 0.9974949866040544
0.20 0.9092974268256817
0.25 0.5984721441039564
0.30 0.1411200080598672
0.35 -0.35078322768961984
0.75 0.9379999767747389

Ranging from 0.0 to 0.75, these discrete values drawn from our function with step size 0.05 will be the basis for training. Now, one could come up with the idea to just memorize all values, so that a sufficiently reasonable value can be picked based on comparison. For instance, to continue at the point 0.75 in our plot, we could simply examine the area close to 0.15, notice a similar value close to 1, and hence go downwards. Well, of course this is cheating, but if a good cheat is a superior solution, why not cheat? Being hackers, we wouldn’t care. What’s really limiting here is the fact that the whole data set needs to be kept in memory, which can be infeasible for large sets; plus, for more complex shapes, this approach would quickly require a lot of weird rules and exceptions in order to find comprehensible predictions.

Net to the rescue

Let’s go back to our table and see if a neural net can learn the shape, instead of simply memorizing it. Here, we want our net architecture to be of the kind [3, 5, 3, 1]: three input neurons, two hidden layers with five and three neurons respectively, and one neuron for the output layer will capture the data shown in the table.


A supervised training mode means that we want to train our net with three discrete steps as input and the fourth step as the supervised training target. So we will train a, b, c -> d and e, f, g -> h, et cetera, hoping that this way our net will capture the slope pattern of our sinusoidal wave. Let’s code this in Scala:

import neuroflow.core.Activator.Tanh 
import neuroflow.core.WeightProvider.randomWeights 
import neuroflow.nets.DynamicNetwork.constructor

First, we want a Tanh activation function, because the range of our sinusoidal wave is [-1, 1], just like that of the hyperbolic tangent. This way we can be sure that we are not comparing apples with oranges. Further, we want a dynamic network (adaptive learning rate) and random initial weights. Let’s put this down:

val fn = Tanh.apply
val sets = Settings(true, 10.0, 0.0000001, 500, None, None, Some(Map("τ" -> 0.25, "c" -> 0.25)))
val net = Network(Input(3) :: Hidden(5, fn) :: Hidden(3, fn) :: Output(1, fn) :: Nil, sets)

No surprises here. After some experiments, we can pick values for the settings instance, which will promise good convergence during training. Now, let’s prepare our discrete steps drawn from the sinus function:

val group = 4
val sinusoidal = Range.Double(0.0, 0.8, 0.05).grouped(group).toList.map(i => i.map(k => (k, Math.sin(10 * k))))
val xsys = sinusoidal.map(s => (s.dropRight(1).map(_._2), s.takeRight(1).map(_._2)))
val xs = xsys.map(_._1)
val ys = xsys.map(_._2)
net.train(xs, ys)

We will draw samples from the range with step size 0.05. After this, we will construct our training values xs as well as our supervised output values ys. Here, a group consists of 4 steps, with 3 steps as input and the last step as the supervised value.

[INFO] [25.01.2016 14:07:51:677] [run-main-5] Taking step 499 - error: 1.4395661497489177E-4  , error per sample: 3.598915374372294E-5
[INFO] [25.01.2016 14:07:51:681] [run-main-5] Took 500 iterations of 500 with error 1.4304189739640242E-4  
[success] Total time: 4 s, completed 25.01.2016 14:20:56

After a pretty short time, we will see good news. Now, how can we check if our net can successfully predict the sinusoidal wave? We can’t simply call our net like a sinus function to map from one input value to one output value, e. g. something like net(0.75) == sin(0.75). Our net does not care about any x values, because it was trained purely based on the function values f(x), or the slope pattern in general. We need to feed our net with a three-dimensional input vector holding the first three, original function values to predict the fourth step, then drop the first original step and append the recently predicted step to predict the fifth step, et cetera. In other words, we need to traverse the net. Let’s code this:

val initial = Range.Double(0.0, 0.15, 0.05).zipWithIndex.map(p => (p._1, xs.head(p._2)))
val result = predict(net, xs.head, 0.15, initial)
result.foreach(r => println(s"${r._1}, ${r._2}"))

with

@tailrec def predict(net: Network, last: Seq[Double], i: Double, results: Seq[(Double, Double)]): Seq[(Double, Double)] = {
  if (i < 4.0) {
    val score = net.evaluate(last).head
    predict(net, last.drop(1) :+ score, i + 0.05, results :+ (i, score))
  } else results
}

So, basically we don’t just continue to draw the sinusoidal shape at the point 0.75, we draw the entire shape right from the start until 4.0 – solely based on our trained net! Now, let’s see how our Frankenstein will complete the sinusoidal shape from 0.75 on:

(Plot: the net’s continuation of the sinusoidal shape from 0.75 onwards.)

I’d say, pretty neat? Keep in mind, here, the discrete predictions are connected through splines. Another interesting property of our trained net is its prediction compared to the original sinus function when taking the limit towards 4.0. Let’s plot both:

(Plot: the original sine wave and the net’s prediction, both drawn towards 4.0.)

The purple line is the original sinusoidal wave, whereas the green line is the prediction of our net. The first steps show great consistency, but the curves slowly diverge a little over time, as uncertainties add up. To keep this divergence low, one could fine-tune the settings, for instance the numeric precision. However, if one takes the limit towards infinity, a perfect fit is illusory.

Final thoughts

That’s it! We have trained our net to learn and continue the sinusoidal shape. Now, I know that this is a rather academic example, but training a neural net to learn more complex shapes is straightforward from here.

Thanks for reading!

A quick primer on TensorFlow – Google’s machine learning workhorse

Introducing Google Brain’s TensorFlow™

This week started with major news for the machine learning and data science community: the Google Brain Team announced the open sourcing of TensorFlow, their numerical library for tensor network computations. This software is actively developed (and used!) within Google and builds on many of Google’s large scale neural network applications such as automatic image labeling and captioning as well as the speech recognition in Google’s apps.

TensorFlow in bullet points

Here are the main features:

  • Supports deep neural networks – and many more machine learning approaches
  • Highly scalable across many machines and huge data sets
  • Runs on desktops, servers, in cloud and even mobile devices
  • Computation can run on CPUs, GPUs or both
  • All this flexibility is covered by a single API making the execution very streamlined
  • Available interfaces: C++ and Python. More will follow (Java, R, Lua, Go…)
  • Comes with many tools helping to build and visualize the data flow networks
  • Includes a powerful gradient based optimizer with auto-differentiation
  • Extensible with C++
  • Usable for commercial applications – released under Apache Software Licence 2.0

Tensor, what? Tensor, why?

„Numerical library for tensor network computations“ maybe doesn’t sound too exciting, but let’s  consider the implications.

Application of tensors and their networks is a relatively new (but fast evolving) approach in machine learning. Tensors, if you recall your algebra classes, are simply n-dimensional data arrays (so a scalar is a 0th order tensor, a vector is 1st order, and a matrix a 2nd order tensor).

A simple practical example is a color image’s RGB layers (essentially three 2D matrices combined into a 3rd order tensor). Or, for a more business-minded example: if your data source generates a table (a 2D array) every hour, you can look at the full data set as a 3rd order tensor, with time being the extra dimension.
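As a quick sketch of those two examples in Python/NumPy (the shapes are illustrative):

import numpy as np

rgb_image = np.zeros((480, 640, 3))       # 3rd-order tensor: height x width x color channel
hourly_tables = np.zeros((24, 1000, 12))  # 3rd-order tensor: hour x table rows x table columns

print(rgb_image.ndim, hourly_tables.ndim)  # both have 3 dimensions, i.e. they are 3rd-order tensors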

Tensor networks then represent “data flow graphs”, where the edges are your multi-dimensional data sets and nodes are the mathematical operations on this data.

Example of a data flow graph with multiple nodes (data operations). Notice how the execution of nodes is asynchronous. This allows incredible scalability across many machines. Image source.

Looking at your data through the tensor formalism gives you a lot of powerful tools that were already developed for tensor algebra, allowing fast, complex computations.  

Tensor networks are also a natural fit for computations done on graphical processing units (GPUs), as these are built exactly for the purpose of very fast numerical operations on such data – speeding up your calculations significantly compared to standard CPU execution!

The importance of flexible architecture & scaling

The data flow graph approach also has further advantages. Most notably, you can split the design of your data flows (i.e. data cleaning, processing, transformations, model building etc.) from their execution. You first build up the graph of your data flow and then send it off for execution: either on the CPUs of your machines (your laptop just as well as a cluster), on GPUs, or on a combination. This happens through a single interface that hides all the complexities from you.
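A minimal sketch of this build-then-execute pattern in the Python API (based on the graph-and-session style of the early TensorFlow releases; exact names may differ in later versions):

import tensorflow as tf

# 1. Build the data flow graph: placeholders are the incoming tensor edges, ops are the nodes.
a = tf.placeholder(tf.float32, shape=[None, 3])
b = tf.placeholder(tf.float32, shape=[3, 2])
product = tf.matmul(a, b)  # a node operating on the two incoming tensors

# 2. Send the graph off for execution; the same graph can run on CPUs, GPUs or a cluster.
with tf.Session() as sess:
    result = sess.run(product,
                      feed_dict={a: [[1.0, 2.0, 3.0]],
                                 b: [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]})
    print(result)  # [[4. 5.]]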

Since the execution is asynchronous it scales across many machines and can deal with huge amounts of data.

You can count on the Google guys to build tools not only for academic use, but also heavy-duty operations in the industry!

Is this just another deep learning library?

TensorFlow is of course not the first library to embrace the tensor formalism and GPU execution. The nearest comparisons (and competitors) are Theano, Torch and CGT (Caffe to a limited degree).

While there are significant overlaps between the libraries, TensorFlow tries to provide a broader framework. It is not only a deep learning library – its data flow graphs can incorporate any data processing/analysis application. It also comes with a very powerful gradient-based optimizer with automatic calculation of derivatives, offering huge flexibility.

Given this broad vision the closest competitor is probably Theano (while Caffe and the existing Theano wrappers have a narrower focus on deep learning). TensorFlow’s distinguishing feature is that by design its focus is on large, scalable architectures with a complete flexibility in the hardware, best suited for industry/operational use, whereas the other libraries have more academic pedigrees.

Initial analyses also indicate that TensorFlow should bring performance improvements over Theano, although no comprehensive benchmarks have been published yet.

As the other packages have already been out for a while, they have large, active communities and often additional supporting software (examples are the very useful wrappers around Theano, like Lasagne, Keras and Blocks, that provide higher-level abstractions on top of its engine).

Of course, with Google’s gravitas, one can expect that TensorFlow’s open source community will grow very fast and the contributors will quickly add a lot of additional features (and find hidden bugs).

Finally, keep in mind, that while Google provided us with this great data processing framework and some of its machine learning capabilities, it is likely that the most powerful machine learning algorithms still remain Google’s proprietary secret.

Nonetheless, TensorFlow is a huge and very welcome contribution to the open source machine learning world!

Where to go next?

You can find Google’s getting started guide here. The TensorFlow white paper is worth a read too. Source code can be found at the Github page. There is also a Vagrant virtual machine with TensorFlow pre-installed available here.