Sunday, November 15, 2015

Glimpsing IBM Watson's High Tech Analytics In Silicon Valley

Silicon Valley types want me hanging out at their business events. One such event last week brought me down to one of the Valley's private venues for an IBM Watson presentation. I'm not the target clientele for this Big Data analytics solution, but I had to check things out. There was no suitable on-location backdrop for my badge selfie, so I had to take the photo below at an undisclosed location.

I signed up to hear their two tracks on procurement intelligence and trade-off analytics after the main pitch. IBM people get the API economy. I heard them pitch their API developer ecosystem at Oracle OpenWorld 2015, and now it's good to see the Watson engine in action. The AlchemyLanguage API looks like an incredible business intelligence (BI) tool. The "news explorer" live link diagram showing connected news stories would be excellent for PR or marketing people, or for open-source intelligence (OSINT) practitioners.
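The core idea behind that news-explorer diagram is simple enough to sketch: stories get linked when they share extracted entities. Here is a toy illustration of that linking step; all the story data and entity sets below are invented, and a real pipeline would get its entities from an extraction service like AlchemyLanguage rather than hand-coded sets.

```python
# Toy sketch of how a "news explorer" might link stories that share
# named entities. Story IDs and entity sets are invented for
# illustration; a real pipeline would populate the sets by calling
# an entity-extraction API.
from itertools import combinations

stories = {
    "A": {"IBM", "Watson", "analytics"},
    "B": {"IBM", "procurement"},
    "C": {"Oracle", "OpenWorld"},
}

def linked_pairs(stories, min_shared=1):
    """Return story pairs that share at least min_shared entities."""
    pairs = []
    for (id1, ents1), (id2, ents2) in combinations(stories.items(), 2):
        shared = ents1 & ents2
        if len(shared) >= min_shared:
            pairs.append((id1, id2, shared))
    return pairs

print(linked_pairs(stories))  # [('A', 'B', {'IBM'})]
```

Each linked pair becomes an edge in the diagram, which is why the visualization is useful to anyone tracing relationships across a news stream.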

The main pitch dude's recommended reading list included a book on machine learning, but I couldn't write down the author's name from where I sat. Amazon lists plenty of machine learning best-sellers, so my local library must have one. I did capture Pedro Domingos' The Master Algorithm and Provost/Fawcett's Data Science for Business from his list, unless I copied the titles incorrectly. I have so many books to read already that adding these will push the completion of my business reading list well into 2016. That's what it takes to demonstrate thought leadership, and that's why I get invited to these events.

One IBM guy introduced his "Cognitive Computing Index," describing multiple ways for human operators to educate maturing AI systems. IBM suggests Watson's clients iterate revisions every 90 days for whatever they have the system compute. Iterative approaches to refining BI output are supposed to maximize the BI's monetary value, and the users filling paid seats should see that value show up in their commission revenue.

The trade-off analytics session demonstrated Watson's Pareto optimization, graphical outputs, and social media stream matching. The recommended pathway records are a useful audit trail for some data miner to explore. I bet that data mining the faulty pathways will reveal how the top 20% of data scientists in an enterprise are making 80% of the correct decisions. That would be some useful Pareto optimization when performance bonus allocation time comes around.
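The Pareto optimization underlying that trade-off session boils down to filtering out dominated options. Here is a minimal sketch over two objectives (minimize cost, maximize quality); the vendor names and numbers are invented, and Watson's actual service surely handles more objectives and richer outputs, but the non-dominated-set principle is the same.

```python
# Minimal Pareto-front sketch: keep only options that no other
# option beats on both objectives (lower cost AND higher quality).
# All option data is invented for illustration.

options = {
    "option_a": (100, 0.90),  # (cost, quality)
    "option_b": (120, 0.95),
    "option_c": (130, 0.85),  # dominated: costs more, lower quality
}

def pareto_front(options):
    """Return the options not dominated on (cost down, quality up)."""
    front = {}
    for name, (cost, qual) in options.items():
        dominated = any(
            c <= cost and q >= qual and (c < cost or q > qual)
            for n, (c, q) in options.items() if n != name
        )
        if not dominated:
            front[name] = (cost, qual)
    return front

print(sorted(pareto_front(options)))  # ['option_a', 'option_b']
```

The options that survive the filter are the "recommended pathway" candidates; the ones that get pruned are the faulty pathways a data miner could audit later.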

The procurement intelligence session was all about making purchasing people into knowledge workers. I remember how I did purchasing as a junior supply officer in the US Army back in the late 1990s. I searched the Web for three different vendors and picked the one with the lowest price. It was too easy and probably sub-optimal. The difference today is that Watson is supposed to make research on prices, vendor choices, and spending history a Big Data effort. If AI truly integrates internal and external data feeds as advertised, then it's a bona fide ERP revolution. If users comprehend Watson's word clouds, heat maps, and visualizations, then it's also a knowledge management (KM) solution.
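The jump from my old three-vendors-lowest-price method to a knowledge-worker approach can be sketched as a weighted composite score over more than one criterion. The vendors, weights, and 0-1 scoring scale below are all invented for illustration; Watson's procurement intelligence presumably blends far more internal and external feeds than this.

```python
# Hedged sketch of moving past "pick the lowest price": a weighted
# score over price, reliability, and spending history. Vendors,
# weights, and scores are invented for illustration only.

vendors = {
    # name: (price_score, reliability_score, history_score), each 0-1
    "vendor_x": (0.9, 0.6, 0.5),   # cheapest, but spotty record
    "vendor_y": (0.7, 0.9, 0.8),   # mid price, strong record
    "vendor_z": (0.5, 0.8, 0.9),
}

WEIGHTS = (0.4, 0.35, 0.25)  # price, reliability, history

def best_vendor(vendors, weights=WEIGHTS):
    """Return the vendor with the highest weighted composite score."""
    def score(metrics):
        return sum(w * m for w, m in zip(weights, metrics))
    return max(vendors, key=lambda name: score(vendors[name]))

print(best_vendor(vendors))  # vendor_y
```

Lowest price alone would pick vendor_x; factoring in reliability and spend history flips the answer, which is exactly the kind of sub-optimality my junior-officer method missed.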

I keep hearing Silicon Valley people talk about how they increasingly prefer workflow ERP solutions over managing legacy files. I told several IBM reps at this event that they will have to integrate workflow data signatures into the internal feeds Watson ingests if they want to stay relevant. It will still be a challenge for developers to build APIs that handle unstructured data, especially if the enterprise has no data warehouse or data lake aggregating external data feeds. The best developers will figure it out. I would figure it out, but I'd rather fiddle with financial applications. Watson and other AIs are supposed to be the "easy button" for data transformation once operators are comfortable educating the systems. The AI revolution means everyone becomes an amateur data scientist.