Showing posts with label knowledge management. Show all posts

Sunday, November 15, 2015

Glimpsing IBM Watson's High Tech Analytics In Silicon Valley

Silicon Valley types want me hanging out at their business events. One such event last week brought me down to one of the Valley's private venues for an IBM Watson presentation. I'm not the target clientele for this Big Data analytics solution but I had to check things out. There was no suitable on-location backdrop for my badge selfie, so I had to take the photo below at an undisclosed location.


I signed up to hear their two tracks on procurement intelligence and trade-off analytics after the main pitch. IBM people get the API economy. I heard them pitch their API developer ecosystem at Oracle OpenWorld 2015, and now it's good to see the Watson engine in action. The Alchemy Language API looks like an incredible business intelligence (BI) tool. The "news explorer" live link diagram showing connected news stories would be excellent for PR or marketing people, or for open-source intelligence (OSINT) practitioners.

The main pitch dude's recommended reading list included a book on machine learning, but I couldn't write down the author's name from where I sat. Amazon lists plenty of machine learning best-sellers, so my local library must have one. I did capture Pedro Domingos' The Master Algorithm and Provost/Fawcett's Data Science for Business from his list, unless I copied the titles incorrectly. I have so many books to read already that adding these will push the completion of my business reading list well into 2016. That's what it takes to demonstrate thought leadership, and that's why I get invited to these events.

One IBM guy introduced his "Cognitive Computing Index" describing multiple ways for human operators to educate maturing AI systems. IBM suggests Watson's clients iterate revisions every 90 days for whatever they have the system compute. Iterative approaches to refining BI output are supposed to maximize the BI's monetary value, and per-seat users should see that value reflected in their commission revenue.

The trade-off analytics session demonstrated Watson's Pareto optimization, graphical outputs, and social media stream matching. The recommended pathway records are a useful audit trail for some data miner to explore. I bet that data mining the faulty pathways will reveal how the top 20% of data scientists in an enterprise are making 80% of the correct decisions. That would be some useful Pareto optimization when performance bonus allocation time comes around.
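IBM doesn't publish Watson's internals, so here's my own minimal sketch of the Pareto idea in Python: keep only the trade-off options that no other option beats on every criterion. The vendor numbers are made up for illustration.

```python
# Minimal Pareto front filter: keep options that no other option
# dominates (at least as good on all criteria, strictly better on one).
# Criteria here are (cost, risk), where lower is better for both.

def dominates(a, b):
    """True if option a is at least as good as b on every criterion
    and strictly better on at least one (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(options):
    """Return the non-dominated subset of (cost, risk) tuples, in input order."""
    return [o for o in options
            if not any(dominates(other, o) for other in options if other != o)]

# Hypothetical vendor trade-offs: (cost, risk)
vendors = [(100, 0.2), (80, 0.5), (120, 0.1), (80, 0.3), (150, 0.4)]
print(pareto_front(vendors))  # → [(100, 0.2), (120, 0.1), (80, 0.3)]
```

Every surviving tuple is a defensible trade-off; the dropped ones are strictly worse deals, which is exactly the audit trail a data miner would want to explore.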

The procurement intelligence session was all about making purchasing people into knowledge workers. I remember how I did purchasing as a junior supply officer in the US Army back in the late 1990s. I searched the Web for three different vendors and picked the one with the lowest price. It was too easy and probably sub-optimal. The difference today is that Watson is supposed to make research on prices, vendor choices, and spending history a Big Data effort. If AI truly integrates internal and external data feeds as advertised, then it's a bona fide ERP revolution. If users comprehend Watson's word clouds, heat maps, and visualizations, then it's also a knowledge management (KM) solution.
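For contrast, here's a toy Python sketch of my old lowest-price drill next to a crude weighted score that also counts delivery reliability. All vendor data and weights are invented for illustration; Watson presumably does something far richer.

```python
# Toy contrast between lowest-price purchasing and a weighted score
# blending price with on-time delivery history. All data is made up.

vendors = [
    {"name": "A", "price": 95, "on_time_rate": 0.70},
    {"name": "B", "price": 100, "on_time_rate": 0.98},
    {"name": "C", "price": 110, "on_time_rate": 0.95},
]

def lowest_price(vs):
    """My old junior-supply-officer method: cheapest quote wins."""
    return min(vs, key=lambda v: v["price"])

def weighted_score(v, max_price):
    # Normalize price so cheaper scores higher, then blend with reliability.
    price_score = 1 - v["price"] / max_price
    return 0.4 * price_score + 0.6 * v["on_time_rate"]

def best_value(vs):
    """Pick the vendor with the best blended price/reliability score."""
    max_price = max(v["price"] for v in vs)
    return max(vs, key=lambda v: weighted_score(v, max_price))

print(lowest_price(vendors)["name"])  # A wins on price alone
print(best_value(vendors)["name"])    # B wins once reliability counts
```

Even this crude version shows why three quotes and a low bid is sub-optimal once vendor history enters the picture.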

I keep hearing Silicon Valley people talk about how they increasingly prefer workflow ERP solutions over managing legacy files. I told several IBM reps at this event that they will have to integrate workflow data signatures into the internal feeds Watson ingests if they want to stay relevant. It will still be a challenge for developers to build APIs that handle unstructured data, especially if the enterprise has no data warehouse or data lake aggregating external data feeds. The best developers will figure it out. I would figure it out but I'd rather fiddle with financial applications. Watson and other AIs are supposed to be the "easy button" for data transformation once operators are comfortable educating the systems. The AI revolution means everyone becomes an amateur data scientist.

Monday, March 02, 2015

Alfidi Capital At DeveloperWeek 2015 San Francisco

I attended this year's DeveloperWeek in San Francisco.  The brand new Pier 27 terminal made for excellent views.  The most recent America's Cup made use of some temporary buildings here and it's nice to see The City repurpose the lot into something useful.


The conference started off in a disorganized manner.  The initial Main Stage talk got relocated to Stage 2, or so I was told.  The other stages weren't open on time in the morning.  The opening talk I attended started 20 minutes late.  Sheesh.  I did not have time to stay all day so I had to make the most of the morning.  I guess the trend in tech enterprises towards minimizing meetings means some presenters aren't accustomed to showing up on time in person.

The morning talk on GitHub pull requests convinced me that engineers are susceptible to simple ego strokes like fancy job titles.  Designating someone as "Senior Associate Engineer" makes them feel superior to a mere "Associate Engineer."  I saw this same head game when I worked at an asset management firm whose initials were the same as Baloney Goofball Imbeciles.  The "Senior Associates" at my former employer worked for the same entry-level wage as the new hires even if they had years of experience.  Giving someone a longer title is much cheaper than giving them a raise.

The gist of GitHub is that its suite offers advantages over wikis as a documentation method for engineering processes.  Updating a wiki takes constant curation effort.  I once inherited a US Army unit's wiki that did a poor job of documenting the unit's knowledge management architecture.  I ended up shutting it down and migrating users to a more hierarchical archiving system.  The US Army's new taxonomy for updating its doctrinal publications is actually a very useful governance technique for engineering handbooks.  Base document changes should be much less frequent than minor document changes.  Splitting out the documents into a base series providing broad guidance makes sense if they are not updated as frequently as product-specific technical guides.

I like that GitHub's online process changes contribute to a meeting-averse culture, a good evolution in engineer-based enterprises that value productivity.  Knowledge management systems in larger enterprises should use automated workflows.  Non-engineers seem to have trouble grasping workflows' utility.  GitHub pull requests apparently create an audit trail of changes documenting a process during updates, especially the "why" of an author's update.  That's analogous to clarifying a commander's intent in a military mission planning process.  I need to examine GitHub myself to find more analogies.
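I still need to examine GitHub itself, but plain git already exposes that audit trail. Here's a hypothetical Python sketch (the function names are mine) that lists recent merge commits, which in a GitHub-style workflow are typically pull request merges carrying the author's "why":

```python
# Sketch: read a repo's merge history as an audit trail of changes.
# Assumes git is installed and the code runs inside a git repository.
import subprocess

def parse_log(raw):
    """Split 'hash<TAB>subject' lines into (hash, subject) pairs."""
    return [tuple(line.split("\t", 1)) for line in raw.splitlines()]

def merge_audit_trail(limit=20):
    """Return (hash, subject) pairs for the most recent merge commits;
    on GitHub these subjects usually name the pull request being merged."""
    raw = subprocess.run(
        ["git", "log", "--merges", f"-{limit}", "--pretty=%h%x09%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_log(raw)
```

That merge subject line is the closest software analogue I can find to a commander's intent statement.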

The Main Stage talks eventually began where they belonged.  The bottom line in several talks is that many verticals are upgrading their tech stacks and this drives demand for developers in the labor market.  I did not mind the thinly disguised pitches for services like HackerRank because they provide a disruptive benefit to productive enterprises.  I have noticed other such disruptive talent matching services migrate to the finance sector.  Whale Path does for finance professionals what HackerRank does for developers, elevating skill demonstration over a stale resume and obsolete academic credentials.  You're only as good as your last performance.  Having a verifiable track record of accomplishment on HackerRank or Whale Path means job performance has a market value.

Some other talks covered deep learning, a new approach to machine learning.  We're all going to hear more about it as VC money starts pouring into IoT devices that need to operate autonomously.  It was fun to see someone walk through the ease of manipulating images displayed on a Lightbox-enabled website.  No coding knowledge is needed to alter Lightbox parameters.  The clear lesson for non-techies is that object manipulation is an easily mastered skill.  Coding as basic literacy still makes sense but not everyone who touches tech will need to code.

I didn't win any raffles or door prizes at DeveloperWeek, nor did I score dates with the hot tech babes who flirted with me.  I still came away with the perspective I needed on how enterprise tech is changing.  DeveloperWeek matters to people driving tech's migration into ease of use modes that business domain experts can interpret.  Anyone in business who is at least minimally tech literate will be economically viable for years to come.

Wednesday, November 26, 2014

The Haiku of Finance for 11/26/14

App life cycle work
Certify knowledge support
Process meant for growth

Data Supply Chain Needs KCS and ITIL

I have blogged before about the data supply chain and I'm pretty sure I'm still way ahead of other analysts covering this emerging topic.  Data sector professionals need to talk more about defining the life cycle management of apps and their supporting data products.  I'll offer two existing constructs that should help data people move toward a more mature life cycle.

The Consortium for Service Innovation (CSI) publishes the Knowledge-Centered Support (KCS) process.  It governs the ongoing revisions and interventions in the life cycle of supporting knowledge for any process.  The engineering approach to refining knowledge management easily supports a data sector product.  Standards for an iterative programming process make revisions easier when an API needs a version 2.0 update.

The Information Technology Infrastructure Library (ITIL) is an established system for optimizing IT service management.  It complements KCS and it should be especially valuable for data sector pros in the documenting and archiving phases of life cycle management.  Understanding ITIL makes implementing ISO/IEC 20000 easier.  Broad architectures aren't just for large IT departments in big enterprises.  Apps will more frequently aim at capturing Big Data, and an ITIL approach in a small app maker keeps it aligned with the needs of larger enterprises who will shop for data packages.

Data sector people who care about managing highly regarded APIs and SDKs should look seriously at the KCS and ITIL architectures.  Having a "KCS Verified" business is like having a Good Housekeeping seal of approval.  It may not be a product differentiator when app reviewers make their best-of-breed assessments in app stores but taking the process seriously should lead to building better apps.  Using ITIL in a data sector business process keeps the life cycle efficient.  Both KCS and ITIL together will help entrepreneurs turn their app idea into a real business.

Tuesday, November 25, 2014

3 Crucial Skills for US Military Veterans Seeking Corporate Careers

I served in the US Army after my studies at the University of Notre Dame.  Some of my ROTC program classmates stayed on active duty for the long haul, longer than I thought would be sane.  They are now approaching their 20-year service milestones, which means some of them are considering life on the outside.  I have them in mind when I think about the references I used years ago when I started my own transition to civilian life.  The published works available to help military veterans make career transitions could fill a whole library shelf.  Most of that material is general and repetitive.  Hardly any guidance is tailored for someone with a more technical career goal.  Fear not, senior veterans, because Alfidi Capital is here to fill the knowledge gap.  

I have identified three skill sets germane to a large corporate environment.  These skills are portable to any corporation and are particularly useful in very technical fields.  Acquiring them requires mastery of peer-reviewed bodies of knowledge.  These qualifications are vastly more credible with corporate recruiters than any military-specific skills a veteran possesses.

Six Sigma certification is the first skill set that veterans should acquire if they want corporate careers.  Completing a Six Sigma project within the US Department of Defense confers a resume bullet more valuable than experience with real bullets.  The American Society for Quality (ASQ) maintains extensive references on Six Sigma and related topics.  The International Association for Six Sigma Certification (IASSC) lists options for completing the qualifying exams.  Completing the appropriate training and exams is not cheap but is absolutely necessary for official qualification.  

Knowledge management (KM) is the second skill set.  Practitioners become the go-to people when an organization translates the DIKW Pyramid into real operations.  Experts read KMWorld for the latest developments.  The American Productivity and Quality Center (APQC) defines many KM best practices.  The KM business discipline does not yet have a universally recognized body of knowledge and several organizations have emerged with competing certification standards.  I believe that mastering the APQC material through independent study is sufficient at present to claim expertise.  

Operations research (OR) is the final skill set.  The Allied Powers in World War II invented the modern field of OR, and today select US Army officers maintain qualification in the operations research / systems analysis (ORSA) specialty.  The Institute for Operations Research and the Management Sciences (INFORMS) is the US governing body for the OR profession; they have all the resources needed for someone seeking qualification.  

Mastering these skills enables a veteran to compete for corporate jobs that have prerequisites beyond entry-level experience.  Combining them with certification as PMI's Project Management Professional would make a veteran's resume very compelling.  Lacking these hard skills can be a serious handicap.  It is an unfortunate fact of modern life that business skills have diverged far enough from the generalist "soft skills" of military leadership to disqualify many veterans from white collar occupations.  Veterans who wish to avoid confinement to the low-income ghetto of permanent entry-level career paths should master widely accepted business knowledge.  This means hitting the books all over again.  

I recently attended a talk by US Marine Corps combat veteran David Danelo about his book The Return:  A Field Manual for Life After Combat.  The audience at San Francisco's Marines Memorial Club recognized that veterans' passion for a meaningful life should carry over into a civilian career once they leave the military.  Passion hits a brick wall when civilian employers find a veteran's resume devoid of recognizable prerequisites.  Veterans who master the three disciplines above prove they have the passion to carry on as relevant civilians.  

Friday, September 12, 2014

The Haiku of Finance for 09/12/14

Sharing all content
Innovation springs anew
Across enterprise

Alfidi Capital Observes BoxWorks 2014

BoxWorks 2014 was the latest display of how Box builds its ecosystem.  I attended for insights into the secret sauce of how an upstart gets other firms to adopt its collaborative tech.  I have never used Box but I should probably give it a shot.  Find my original genius in bold text, as always.


The kickoff chat between Box CEO Aaron Levie and media mogul Jeffrey Katzenberg was cool.  They must have played some Disney film score when he came out but I didn't recognize it because I don't have kids to raise.  The big lesson was that a strong mission statement, a powerful brand, and great tech make a successful business model.  Okay, Jeff, but what about human talent?  The gene pool of talented writers and animators is only so deep, so the great tech among media giants will have to develop AIs that mimic those human abilities.  Jeff did confirm that the size of the moviegoing market and the talent pool of animators are limits on the number of movies that studios can produce.  I was stunned to hear Jeff say that the digital volume of a typical DreamWorks movie is so dense that they have to use collaborative software to track the edits.  I look forward to the rich video and user-driven animation that tech is supposed to unleash, but we get what we pay for.  Many of the amateur mashups on YouTube are so derivative and uninspired that they're not worth watching.  Jeff's best lesson from the start of his career is that exceeding expectations in any job or mission assignment leads to winning.  Okay, Jeff, but I tried that in large financial service firms and it only got me fired because no one would tolerate it.  Jeff got lucky and I did not.

Aaron and his top Box people had more announcements to share.  Their new Box.org platform offers content management to non-profits.  That follows the latest trend in Silicon Valley enterprises.  Enterprises want to do well by doing good.  Box has been in mobile file sharing since before smartphones and cloud servers made it easy and cheap.  It's only fair that non-profits now get in on the action.  The big product announcements were Box configurations for individual MS Office users, multi-user cloud deployments, and an annotation feature coming in 2015.  I have seen other purveyors push routinized workflow products, and now Box Workflow is coming in 2015 for rule-based operations.  I was quite impressed with the look of these products; the MS Office compatible Box display looked better than SharePoint as a knowledge management solution and the workflow looked like a wise use of BRMS.

The special surprise guest at the keynote was Oscar-winning dude Jared Leto.  I had never heard of the guy.  He brought his Oscar to pass around in the audience, as if a bunch of tech middle managers had something to add to his artistic ability.  Aaron compared the Oscar favorably to Box's Crunchy award and said it must be the height of Jared's career to appear at a software conference.  Jared stayed in character as himself throughout this cameo at BoxWorks, and endorsed Box's ability to share artists' content.  One audience member asked an excellent question about how the cloud can impact older art forms that are not digitized.  Aaron and Jared think it will help older art find new audiences.  I had a mental image of a bunch of artists around the nation collaborating on a sculpture in real time, directing some robotic arm in a studio by uploading diagrams into a Box workflow engine.

I spent some time at the 1st Annual Box Partner Summit that ran concurrently with the first day of the main conference.  The main theme of "commitment" was everywhere in the quotes from senior Box people.  I expect banality at most conferences but I did not know enterprise software managers were deep enough to quote Sartre.  Box leverages its partners' domain knowledge to identify pain points of prospective enterprise clients.  Properly incentivized partners of all sizes are willing to refer enterprise clients to Box for SaaS solutions.  Partner rebates are great if they grow earnings first and revenue second.  That may be hard for some growing cloud companies to swallow because they need to impress Wall Street with pro-forma EBITDA if they want to go public.  I wonder how CMOs and CEOs calculate the effectiveness of such incentives.  Good programs should have upsell options with measurable ROIs.  Kudos to Box for positioning its offerings partly as KM solutions for knowledge workers.

I had to explore the finance and legal workshop because I know startups that need to collaborate in those areas.  Permissions management in Box sounds a lot like SharePoint, which is really more a policy issue than a tech issue.  I am not clear on how Box is different from SharePoint or Google Drive.  It obviously does document management, file synchronization and sharing, and workflows.  I suspect the Box advantage is its API allowing custom-developed apps that do what competitors cannot do.

The keynote and fireside chat with Jim Collins was phenomenal.  He thinks senior corporate leaders will increasingly come from CIO and IT ranks because enterprise computing has become so important.  I only agree with him if he means the software sector; I'm pretty sure CEOs in manufacturing and energy need to know how to make physical things work.  He organized his talk around ten major questions, which I won't repeat here because the background he presented on each one is in his body of work in Good to Great and the works on his "Tools" page.  His bonus question was inspired by advice Peter Drucker gave him to think about being useful to the world.  Jim's answer was to give moments of kindness and encouragement to others.  Changing others' lives and making people better mattered to Jim.  That may be why the audience gave him a standing ovation.  I have never seen a standing ovation for a guest speaker at a business event.  Wow.

Aaron Levie's chat with Jim explored more of this work.  Jim thinks Information Age management science means different applications of the same principles he studies.  Selecting people matters more and leading a network has a less formal power dynamic.  Jim talked a lot about how enterprises must always adhere to their values.  That's great but I've always known that a company's stated values are less real than the values top executives model in their daily behavior.  Aaron Levie comes across as engaging, irreverent, and intelligent.  If he and his top team exhibit those values when they're not in the public eye then Box is on the right track.

The CIO panel reminded me of the work I avoid by being in finance.  They all thought that business process transformation opens a huge market for unmet needs in enterprise collaboration.  I have never worked on a waterfall chart-driven process but that's okay, because these folks say requirements-driven planning is moving from waterfall iterations to collaborative iterations.  I need to see evidence that corporate boards demand more tech-savvy directors who can evaluate enterprise risks.  I think that may be just psychological projection coming from CIOs who aspire to board memberships.  All of the anecdotal reviews I've read on board performance suggest boards are mostly lap dogs asleep at the wheel, selected specifically because they won't challenge management.

The VC panel on innovations was my next stop.  Professional board membership must be lucrative for VCs and others, but it comes with the caveats I mentioned immediately above.  The VCs mentioned effective communication ability and other traits that mark good leaders.  They did not mention honesty and personal integrity as desirable executive traits, but they did admit the necessity of firing any key executive who does not embody a firm's core values.  One key insight for tech startups is that lead scoring algorithms now generate significantly higher yields from marketing spending.  I was stunned when someone said tech megacorps (e.g., Google, Facebook) spend $10B on a buyout just to defend their $100B market cap.  That blew my mind.  This amazing insight implies headline deals are driven partly by celebrity billionaires defending their net worth with mergers and acquisitions.  Their parting thought was that investors should favor startups where the cost of capital and other macroeconomic conditions won't affect a business model.

The health care sector still has an innovation curve even after the ACA, according to the next panel.  I do not understand how the ACA's payment reforms will work unless they are intended as a stalking horse for a single-payer system.  I learned that the government pays hospitals to adopt digitized record systems, which is probably more costly than a simple regulatory mandate for any provider who wants to access the Medicare or VA payment networks.  Like I said, I just don't get this kind of reform.  The biggest insurance plan underwriters are able to drive demand for optimal payment networks.  They can identify high quality, low cost providers and eliminate variance from their PPO networks.  Data on these pricing variances in treatments allows buyers to eliminate sub-optimal choices.  The private sector makes that efficiency work because competition for services remains strong in a free market with many choices.  It will be less efficient in a market dominated by government-backed exchanges.  This is one reason why providers can't prove whether they save money by using Medicare's Accountable Care Organizations (ACOs).  This may have been the only panel where none of the panelists mentioned Box, knowledge management, shareable content, or collaboration.  The experts were so contentious because the ACA has politicized health care and skewed the sector's natural tech evolution toward choices that favor top-down intervention.

The retail panel sounded interesting because it's a sector I rarely explore.  The panelists said they want real-time data from all channels on all platforms, and that's a big market opportunity for SaaS.  It was depressing to hear someone admit that most retailers still live in a company-driven data model (internal focus) and not a customer-focused data model optimized for mobile (external focus).  Retail or any sector reacting to real-time data must have a fast DevOps cycle.  Predictive analytics adds value in DevOps by showing where proactive IT fixes should focus.  The weakest link in any retail business model is the high-turnover, low-wage workforce.  The best IT innovations must reduce the cognitive load on those low-skill workers so they make fewer mistakes.  That's why automation will eliminate many fast-food jobs.  Bring on the social CRM and get rid of the minimum-wage workforce before unions can organize them.

My favorite panel was on government innovation, and not just because hot Box.org babe Karen Appleton was the moderator.  The federal government's approach to data management and innovation varies by agency.  The FCC wrestles with over 200 legacy systems while it maps broadband capability for the public.  DOD's high-cost early adoption of tech still makes waves when program expenses draw scrutiny, so other agencies should jump on board that gravy train while it's still on the tracks and ask to share in DOD's tech bounty.  Federal law hobbles innovation by classifying an agency spending money on another agency as a felony if said spending does not benefit the original agency.  Wow, that's depressing.  No wonder every agency has internal counsel.  The FCC panelist was extremely thought-provoking by wondering whether public scrutiny in a democracy combined with social media can harm risk-takers who might otherwise have productive government careers.  He said it's also worth wondering whether authoritarian governments can capitalize on high tech faster than democracies because they are under less scrutiny without checks and balances.  Wow, heady stuff.

I sat front and center for that government panel because I had to ask them my only question of the conference.  I asked the panel for their thoughts on two government programs with the word "innovation" in them:  the NSF's Innovation Corps and the use of the Presidential Innovation Fellows (PIFs) in GSA's 18F incubator.  Aneesh Chopra, the former White House CTO on the panel, loved both programs.  The I-Corps trains people in Steve Blank's methods to commercialize the vast research produced in the Federal Lab Consortium.  The PIFs in 18F have built some interesting tools for other agencies, and their work anticipated the creation of the US Digital Service.  The panel experts had done tons of thinking on driving government innovation; now the world gets to hear my ideas for Uncle Sam to use.  I want to see these innovators push more agencies to advertise their needs on Challenge.gov.  DOD should use it to farm out simpler fighter aircraft concepts that won't violate Augustine's Law #16 on astronomical costs.  It would also be great if FedRAMP used Cloudonomics metrics.  Oh yeah, let's get veteran-owned tech startups into I-Corps' pipeline so they get the inside track on tech transfer to the marketplace.

I caught the tail end of Aaron Levie's chat with Vinod Khosla before everything ended.  Vinod thinks Jim Collins' work is bull-stuff.  Funny!  He advises people not to try to predict the future.  Just go out and build something.  I hope he means "fail fast" and move on.

I was pleased to notice that the Box employees working as show floor guides included some very attractive women in tight jeans and purple Box t-shirts.  I was hoping to see some more of that kind of box but did not get the chance, if you know what I mean.  I also prowled the show floor and asked several booth sponsors whether they thought the iCloud celebrity photo hack was a wake-up call for cloud security.  The collective shoulder shrug I got in response told me that I may have been talking to salespeople instead of DevOps people who fix security holes.

Box did very well with BoxWorks.  I did not have time for the Jimmy Eat World concert but that's okay.  Plenty of youngsters got to have fun.  I'll have more fun at BoxWorks next year.

Monday, August 11, 2014

Pain Points in the Semiconductor and Solar Sectors

I mentioned in my report from Intersolar / SEMICON 2014 that the semiconductor and solar sectors have some unmapped pain points.  I took it upon myself to identify those pain points in brief CustDev interviews with vendors on the expo floors.  I was not there to sell them anything, so I had more credibility than a vendor.  That's how I obtained the insights I'll share below.

I'll start with the semiconductor ecosystem.  The SEMICON exhibitors were all kinds of trinket makers that large semiconductor manufacturers would find in their supply chains.  The most frequently mentioned pain point for them was production cost.  Equipment makers don't always know what drives the cost of producing a complex product, which in turn determines price.  They don't even know if the problems reside in their designs or their supply chains, even after analysis.  Longer lead times add costs and customers don't always forecast their needs very well.

Other pain points in semiconductors seem to be hostage to production costs.  Addressing marketing channels is complicated by tight resources (making it difficult to respond to a changing market segment) and the difficulty of getting product release information into the hands of the technical audiences with purchasing authority.  Human resources came up repeatedly as another irritant.  Hiring the right people was always difficult, and many of the company leaders I interviewed said "people cause problems."  They never said what kind of problems; I suspect those problems really stem from a lack of knowledge about how to solve marketing and production problems.

I have concluded that the relationship between production costs and marketing costs is a strong source of pain for the upstream parts of the semiconductor sector.  The mismatch between market knowledge and the ability to adjust capacity is costing the sector money.  This is an underexplored area that is fertile ground for solutions in codifying manufacturing knowledge, material costs, and business process maps.  I believe that startups can deliver disruption in this market by deploying enterprise Big Data analytics solutions addressing those knowledge gaps.  There is money to be made in solving manufacturing problems.

I'll continue with the solar sector.  Three very different pain points emerged under the broad topics of regulation, project knowledge, and financing.  Regulation poses unexpected problems for installers unfamiliar with local fire codes or national safety regulations.  This is particularly costly for inverters that must manage fault detections and shutdowns.

Project knowledge is a challenge for solar installers.  Project developers often ignore monitoring until the end of a sales cycle, so it is not weighted in system design until after funds are spent.  Each solar project is unique to a specific architecture and geography.  Developers stumble when they ignore the costs of grid access where transmission lines are inadequate.  I had a hard time believing some solar component manufacturers have difficulty sourcing basic materials like steel and silicon, so perhaps I spoke to at least one operator who was incompetent.

One very knowledgeable solar person made my day by addressing the cost of a typical installation.  Soft costs are a big factor in residential PV.  The expected lifetime of durable modules in commercial PV is still too much of an unknown.  These uncertain costs make the perfect segue from project knowledge to financing, because uncertainty in estimating project cost means financing must be flexible.  The ease of obtaining customer financing would sell more solar PV systems.

I have concluded that the challenges facing the solar sector are more diverse than those facing the semiconductor sector.  Entrepreneurs can't solve the regulatory pain point, but utilities may be an untapped source of support in pushing reform.  There may be a market for apps that help installers navigate local, state, and national regulatory mazes.  DOE has made strides in reducing soft costs and making project planning more transparent.  Solving the project knowledge pain point is an open field for startups developing Hadoop-based knowledge sharing architectures.  Finally, the financing pain point has a plethora of solutions at the federal and state level but consumers may not know about all of the tax incentives they can use.  Yield cos and tax equity are financial solutions for commercial projects but offer little relief for single-unit residential installation.  Once again, apps may help the real estate sector arrange financing for home improvements that incorporate solar, wind, and storage installations.

I only had time to hit up a few dozen vendors so my impressions are not as robust as what a startup would need for an actionable marketing plan.  Oh yeah, the funniest part of my CustDev exploration was when I asked one SEMICON guy to name his biggest pain point.  He rolled his eyes and said, "People like YOU!"  I knew when to back off but I was not deterred from gathering anecdotes.  I make it my mission to understand how to make money from disruption.  Someday a startup will catch my eye because it can solve the problems I identified here.  I'll be ready to commit my knowledge as sweat equity.  

Tuesday, July 15, 2014

Investing in Collaboration

I attended "The Art of Collaboration" yesterday at the Commonwealth Club.  Stewart Levine was an intelligent and witty presenter on the importance of collaboration skills in the workplace.  I think the dude should teach at the Learning Annex but he's probably got a full calendar working with Resolution Works and Mobile Business Academy.  I learned enough about his themes to want to read his book Getting to Resolution because I must apply financial metrics to collaboration.

The one point Mr. Levine made that jumped out at me was that organizations now hire people for their emotional intelligence (EI).  I can only assume he had senior management positions in mind where skills in managing people matter more than technical competence.  I am absolutely certain that EI does not matter at all in low skill, entry-level jobs like the ones I held before becoming self-employed.  Some level of experience is still a primary requirement to obtain even the lowest position in a white collar professional career.  No amount of EI can replace entry-level skill.  I really liked Mr. Levine's advice on dealing with sociopaths:  Get away from them.

Collaboration has tons of literature supporting different approaches throughout history.  I am more immediately concerned with determining whether managers can measure collaboration's ROI.  The Wikipedia articles I like to reference feed a non-profit collaborative platform.  Macrowikinomics extends the original Wikinomics model into a for-profit ecosystem.  Understanding this framework is key to determining where to make effective investments in something billed as a "collaboration" enhancement.

Investing in collaboration IMHO poses two challenges.  The first is how to incentivize collaboration with HR policy.  The second is how to track collaboration's results.  Solving those two challenges requires a knowledge management (KM) approach integrating two different families of enterprise-wide metrics.  Incentivizing people to work together means deploying ERP modules that mine email traffic for expert references and plotting those experts' recorded interactions.  Measuring results means scoring the products of group work by market share growth, costs saved, and other bottom-line KPIs.  This gets complicated and KM people will have to speak the IT department's Cloudonomics language.
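The email-mining idea above can be sketched in a few lines.  Everything here is hypothetical for illustration: the names, the message list, and the expert-scoring rule are my inventions, not any ERP vendor's actual module.

```python
from collections import Counter

# Hypothetical email metadata: (sender, recipients) pairs standing in for
# what an ERP mining module might extract from message headers.
emails = [
    ("alice", ["bob", "carol"]),
    ("bob", ["alice"]),
    ("carol", ["alice", "dave"]),
    ("alice", ["bob"]),
]

# Tally pairwise interactions between people.
interactions = Counter()
for sender, recipients in emails:
    for r in recipients:
        interactions[frozenset((sender, r))] += 1

# The most-connected people are candidate experts worth plotting.
degree = Counter()
for pair, n in interactions.items():
    for person in pair:
        degree[person] += n

print(degree.most_common(1))  # [('alice', 5)]
```

A real deployment would also need the results side: joining these interaction graphs to the financial KPIs that score the group's output.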

The ERP cost of monitoring collaboration is an investment.  A Google search of "collaboration ROI" reveals plenty of expert thinking on how such an investment pays off.  This Information Week article from 2011 surveyed several studies of how collaboration tools affected enterprise productivity.  Cisco has developed a serious framework breaking down the ROI of collaboration, referencing Ron Ricci's and Carl Wiese's The Collaboration Imperative.  Carl Wiese also authored Cisco's 2010 white paper, "The Return on Collaboration."  It's all terrific theory, but frankly I've had some experience with one of Cisco's collaboration products called Cisco WebEx.  Solving frequent video and audio disruptions would definitely enhance collaboration.

I do not enjoy collaborating with other humans.  I usually have to decelerate my thinking cycle, speak more slowly, and use simpler concepts than I would if I were working alone.  One major drawback to collaboration is its tendency to produce a suboptimal result when superior performers' work is averaged down to a lowest common denominator acceptable to all.  Modern techniques in data analysis and knowledge management are supposed to alleviate this tendency.  They may work best when a manager with high EI is in charge of getting the most out of people.  Proving a collaboration ROI means connecting the dots between costs of systems deployed to track human interaction and the results of such group work in financial KPIs.  Get to work, KM professionals.  Your product managers need those collaborative tools.  

Monday, May 26, 2014

World Bank Needs Content Marketing

The recent news that nobody reads most of the World Bank's published content made me wonder what the world is missing.  I also wonder how much money the World Bank is wasting on knowledge content that fails to generate traction.  The World Bank's own report on its downloads states that a quarter of its budget for country services goes to knowledge products, and 31% of these are never downloaded.  That's nearly an 8% drag on the country services annual budget.  Simply cutting the product budget by 31% may yield an immediate ROI if the remainder is allocated to more of the multi-sector reports that are most frequently downloaded.
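The arithmetic behind that drag figure is simple enough to check, using the percentages cited above (illustrative figures, not audited numbers):

```python
# Back-of-the-envelope check of the drag from never-downloaded reports.
knowledge_share = 0.25   # share of country-services budget spent on knowledge products
never_downloaded = 0.31  # share of those products with zero downloads

drag = knowledge_share * never_downloaded
print(f"{drag:.1%}")  # 7.8%
```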

Generating more external research citations via Google Scholar will help validate the World Bank's mission of informing policy debates.  If 87% of the Bank's work goes uncited, Google's tracking tools can reveal which ones do get cited by correlating language, page count, subject matter, and other metrics.  Publishing in PDF should not be a limiting factor.  I have seen plenty of academic material circulate in PDF copy because it successfully finds an audience.

The World Bank's social media and knowledge management people need to talk about content marketing.  Google searches for marketing PDF content in social media reveal plenty of free guides from Marketo, Adobe, and other sources that want marketers to succeed.  Has the World Bank ever cross-published its conference presentations to SlideShare?  They should try it.  It works.  Have they ever completed a market analysis of the demographics that attend their conferences and request reports?  Their depressing analysis of download stats may be the first step.

I suspect the World Bank's problem lies in its inability to meet a market need for solutions.  It acts like a bureaucracy that expects its captive customers to walk right in to its Open Knowledge Repository.  The audience won't come if they don't know how the portal's products will benefit them.  Private sector marketers know they must push media to a target market.  The World Bank's content can solve the world's problems if it can push relevant content to an audience that needs it.  

Friday, April 04, 2014

HR Community Owns Training ROI Calculation

I have not thought much about the HR community ever since I completed my undergraduate major in that subject.  I washed my hands of the profession in 1995 after concluding it was not a path to high corporate achievement.  I have a newfound respect for HR right now after discovering that they can contribute to the bottom line.

Check out SHRM's ROI methodology for assessing training.  ASTD also publishes material on the ROI from learning.  The equations for ROI are pretty simple, just like many other concepts in finance.  The hard part is assembling the data measuring use cases to make "before and after" comparisons.  In other words, comparing error rates, message delays, etc. after training a workforce should show cost savings that exceed the cost of that workforce's training.
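The equation really is that simple.  Here's a minimal sketch with hypothetical before-and-after figures; the dollar amounts and the training program are invented for illustration, not drawn from SHRM's or ASTD's materials.

```python
# Classic ROI: net benefit divided by cost.
def training_roi(benefits: float, costs: float) -> float:
    return (benefits - costs) / costs

# Hypothetical: error-related rework cost fell from $120k to $90k per year
# after a $10k training program.
savings = 120_000 - 90_000
roi = training_roi(savings, 10_000)
print(f"{roi:.0%}")  # 200%
```

The hard part, as noted above, is defending the $120k and $90k figures with real before-and-after measurements.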

Assembling the data in pre-enterprise computing days would have required some Cheaper By The Dozen type of efficiency expert timing workers with a stopwatch and collating their misplaced records.  Enterprise computing makes it all so easy today.  Knowledge management reps can track workflows in MS SharePoint or Evernote suites.  Tally up the missing files and misdirected workflows for those post-training comparisons.  Even the training itself can be almost costless with self-directed modules requiring little downtime.

I'm glad I never worked in HR.  Executives still see it as a cost center because most HR people don't think in ROI terms.  Maybe that's why I never belonged in that field.  I changed course with an MBA in finance.  I would rather invest in faceless corporations than live human beings.  Knowing the ROI for human effort takes some of the risk out of dealing with people.

Wednesday, March 05, 2014

Identifying The Manufacturing And Design Bodies Of Knowledge

I attended a PR seminar last night that got me thinking about design details that drive a PR message.  A lot of the panel's comments addressed the expertise that cross-functional teams bring to product design.  They also covered how product features drive user engagement, which will ultimately get the product's story told.  Cross-functional teams have been all the rage for decades but the manufacturing knowledge that drives product design has become a lost art in the US ever since American executives started outsourcing production to developing countries.  Practitioners need to know where knowledge of manufacturing and design can be found.

I asked the panel if a body of knowledge exists that product designers can use.  One expert remarked that innovation has outpaced documentation, and many product development details can escape notice.  That tells me there's a gap in knowledge management where some automated solution for documenting product and process changes can fill an enterprise need.  Another panelist mentioned the free courseware at edX and free design templates at the MIT Media Lab.  Those are great sources for people adding skills to their repertoire.

Professional societies have organized larger bodies of knowledge (BoKs) that pertain specifically to manufacturing and design.  APICS has a BoK for operations management.  The Society of Manufacturing Engineers has a BoK for manufacturing technology, and another BoK for lean certification.  The Usability Body of Knowledge should be very useful to any product designer working the human-computer interface (HCI) for wearables.  The IEEE software engineering BoK, ASQ quality management BoK, and ASQ reliability engineer BoK are within the reach of anyone willing to study them.  All of those things matter in scaling up hi-tech products.

The secrets to success in product development aren't secrets at all.  They're buried under reams of academic concepts that practitioners have spent decades validating.  Practitioners who master the above BoKs should populate the cross-functional teams that design products.  The crucial factor today in product success is iterating product development in response to CustDev on a very compressed timeline.  One panelist remarked that the old way of developing product features in advance of seeking customer feedback now takes too long to get a product to market.  Enterprises doing CustDev can make that happen faster.

Most of the people I've seen attending the startup talks and meetups in San Francisco aren't very impressive.  They're either too dense to benefit from the panelists' expert wisdom or too impatient to slog it out through the long road of development.  There aren't many shortcuts in product development, and only experts can find the ones that exist.  Experts do that once they've mastered BoKs and can see intuitively how systems behave.  Come to think of it, these BoKs are the kind of multidisciplinary education that artisan designers in the maker movement need.  The San Franciscans who show up at meetups should spend less time grabbing food from their hosts and more time applying BoKs to real projects.  

Friday, February 28, 2014

Checking Out HootSuite During RSA Conference USA 2014

I didn't have time for the RSA Conference USA 2014 this week, or the dissident TrustyCon that sprang up as a reaction to the IT security sector's problems.  The only RSA-related event I was able to squeeze into my calendar was a "Connect" event from HootSuite.  I first noticed HootSuite thanks to their Ow.ly URL shortener but they have other free tools I plan to check out.  You can all check out this awesome action shot I took at the event.


HootSuite is serious enough about building its ecosystem that it offers its own tutorials at HootSuite University and is sponsoring a certification program through the Newhouse School at Syracuse University.  I expect a lot more social media marketing companies to start partnering with brand name universities.  It will be the only way traditional universities can compete with the MOOC onslaught that is revolutionizing education.

I like the concept of a social media dashboard that integrates multiple channel feeds.  The linkbait theme of this HootSuite event was that "social media managers are dead," so companies offering integrated dashboards make social media marketing everyone's business.  Anecdotes about CEOs engaging their audience through social media make for good PR but the ROI of solving one person's problem is hard to measure.

Social media marketers have said that social media should be at the top of an enterprise's purchase funnel.  I never saw a purchase funnel depicted in my MBA marketing class.  That means I got a worthless MBA, but I realized that long ago.  I should have learned from free sources instead, like this McKinsey Quarterly article from 2009 on how messaging must move outside the purchasing funnel.  McKinsey has also discovered that the networked enterprise has a clear payoff.

I had to look up a few new terms I heard at this event.  Dashboards like HootSuite's are useful in "social media audits."  A Google search of that term leads to a bunch of marketing offers and this ISACA definition of a social media audit that is probably the most objective view on the subject.  The audit's use of a people / process / technology paradigm mirrors a common definition of knowledge management.  Take heed, KM folks, because you need to work with the marketing department's social media team to make sure everyone is tracking the right channels.  Someone else mentioned "social DNA" but my search results returned more stuff like a proprietary plug-in than a broad new concept.  Lo and behold, KMWorld discussed social DNA in 2013.  I like that the KM community puts its fingerprints all over these social media concepts.  The whole social DNA scheme needs a clearer definition, and I suspect it describes the extent to which enterprise search and other sharing tools have permeated both an enterprise's internal IT architecture and its corporate culture.  Every marketer should know how to measure "effective reach" and social media now extends that reach to multiple new channels.  

Forrester has a succinct discussion of the three types of social media strategies.  I had never heard of the "hub and spoke" strategy but a Google search reveals plenty of opinions on its execution.  Once again, the obvious requirement for KM integration jumps out at me from the hub and spoke model.  I think a social media dashboard that integrates well with a KM suite (namely MS SharePoint) would be awesome in an enterprise.

I had an epiphany after listening to HootSuite's executives and clients discuss the metrics they use to assess audience engagement.  Recent reports on fraudulent likes and followers in leading social media platforms have been a hard wake-up for marketers committed to effective ad spending.  I suspect that shares and retweets are far less prone to dishonesty than likes and follows, because they require users to engage with content instead of with a static social media presence.  In other words, it's easier for a paid liker in some "like farm" in a developing country's Internet cafe to like a whole bunch of Facebook pages than it is for them to share a message from that page.  It's similarly easier for a paid shill to follow a Twitter account than to retweet useful content.  That's my original insight, fellow Web denizens.  Measure your audience engagement with metrics focusing on shared content and not some static page's artificially inflated reach.  Sharing quality content really works.

The folks in attendance were mostly in their mid-20s to early 30s.  Now I know who buys all of the overpriced denim wear I see at hip clothing boutiques all over the Bay Area.  It's these young techies working for mobile startups and social media marketing companies, and they have disposable income for expensive but trashy clothing.  I filled up on free food and drink, and chatted up a bunch of attractive women.  Those are my own personal audience engagement metrics.

Full disclosure:  I have no business connection to HootSuite.  No one paid me anything to write this article.  I may use HootSuite's free tools at some point in the future.  I like free things because I'm a cheapskate.  

Monday, January 20, 2014

Risk Management In VUCA Environments

Volatility, uncertainty, complexity, and ambiguity (VUCA) characterize any competitive environment.  Enterprises that develop systems to navigate VUCA risks should be more resilient than their competitors.  I've written tons of posts on knowledge management (KM) and decision management (DM).  If I were running an enterprise larger than my one-person show at Alfidi Capital, I'd have some KM and DM practices in place.  Let's discuss a few below.

Deconstructing decision trees.  Human beings have a hard time thinking in probabilities.  Weighing probabilities and using game theory don't come naturally to most of us but they are teachable subjects.  Decision trees for project options are particularly valuable but discarded options may come into play later as "branch and sequel" plans if a selected option fails.  This is why documenting rationales for decision tree options matters.  The article "The Curious Case Of A Broken Crumb Trail" from KMWorld March 2013 describes the utility of creating an Option Outline documenting the reasons why various project options were considered and discarded.  This allows project managers to revisit archived knowledge for tips on alternate ways forward if a main effort runs into trouble.
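A toy decision tree makes the "branch and sequel" idea concrete.  The options, payoffs, and probabilities below are invented for illustration; the point is that the runner-up stays on file instead of vanishing.

```python
# Hypothetical project options with made-up success probabilities and payoffs.
options = {
    "build in-house": {"p_success": 0.6, "payoff": 500_000, "cost": 200_000},
    "license vendor": {"p_success": 0.9, "payoff": 300_000, "cost": 150_000},
    "do nothing":     {"p_success": 1.0, "payoff": 0,       "cost": 0},
}

def expected_value(o):
    # Probability-weighted payoff net of cost.
    return o["p_success"] * o["payoff"] - o["cost"]

ranked = sorted(options, key=lambda name: expected_value(options[name]), reverse=True)
chosen, runner_up = ranked[0], ranked[1]

# Archive the whole ranking and its rationale; the runner-up becomes the
# ready-made "branch and sequel" plan if the chosen option fails.
print(chosen, runner_up)  # license vendor build in-house
```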

KM governance of DM rules.  This is a discovery I stumbled onto last year after hearing experts describe the advantages of automated business rule management systems (BRMS) engines.  Automating every routine thing is great, and AIs can help, but human decision makers must remain in the loop at the top.  Regular manual updates to the KM collection guidance ensure that managers capture results that inform strategic key performance indicators (KPIs).  Top management must publish those KPIs in media where every subordinate manager can track them.

Business continuity planning.  This should be well-established by now in large organizations but small and medium enterprises (SMEs) should also do it.  This is more than buying insurance.  A business continuity plan should start from the SWOT matrix's identified environmental threats and continue with a risk assessment of hazards graphed in a 2x2 matrix.  That's right, folks, I'm talking about severity versus probability once again.
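That severity-versus-probability screen can be a few lines of code.  The hazards and the 1-to-2 ratings below are invented for illustration; a real plan would rate hazards from the firm's own SWOT threats.

```python
# Hypothetical hazards rated (severity, probability) on a 1-2 scale, 2 = high.
hazards = {
    "data center flood":   (2, 1),
    "key staff departure": (1, 2),
    "supplier failure":    (2, 2),
    "minor outage":        (1, 1),
}

# The high-severity, high-probability quadrant of the 2x2 matrix gets
# continuity plans first.
priority = [name for name, (sev, prob) in hazards.items() if sev == 2 and prob == 2]
print(priority)  # ['supplier failure']
```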

A culture of honesty.  I thought about putting this one up front but I decided it would be more effective as my last point.  Human beings typically don't use Bayes' Theorem to update their assessments of conditional probabilities; they instead overweight the most recently acquired information.  I may have misstated that explanation in the past.  Whatever; I have to include this to salve my own conscience.  Highly ethical organizations cultivate trust vertically and horizontally, which makes sharing information easier.  It also accelerates actions during a crisis because teams that are honest and trusting won't hesitate to execute good decisions.  This is my understanding from personal experience with both trustworthy and untrustworthy people.
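For the record, the Bayes update I keep invoking works like this.  The prior and likelihoods below are invented numbers; the scenario of a passed milestone review is my illustration.

```python
# Bayes' Theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(1-H)].
def bayes(prior, p_evidence_given_h, p_evidence_given_not_h):
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Hypothetical: a project is on track with prior 0.5; a passed milestone
# review is 80% likely if on track, 30% likely if not.
posterior = bayes(prior=0.5, p_evidence_given_h=0.8, p_evidence_given_not_h=0.3)
print(round(posterior, 3))  # 0.727
```

Note that the posterior moves to about 0.73, not to 0.8; overweighting the newest evidence is exactly the recency error described above.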

These principles are worth incorporating into a continuous improvement model.  Most people won't do it; they'd rather just wing it through life with no data supporting analysis.  That's totally understandable in light of human nature.  My thinking is for the handful of people on this planet who truly think for themselves and care about things that matter.  

Wednesday, December 11, 2013

Machine Learning Needs a Common Sense School

I've been hearing from thought leaders all year about how virtualization, machine learning, and M2M traffic are going to solve the enterprise problems that the cloud and Big Data have not yet solved.  That's just great.  Those things sound terrific in sales pitches to prospects in large convention center halls.  I think these thinking machines need to learn some basic vocabulary before they start speaking to us.

MIT's ConceptNet is an attempt at making machines autodidactic.  The difference between humans and AIs is that we carbon-based life forms assemble experiential knowledge into frames of reference called common sense.  Machines have not yet generated common sense because they don't experience the world the way we do, with five senses collecting rich content.  MIT allows humans to input their own interpretations of common sense into the Open Mind Common Sense Project that feeds ConceptNet.  

Contextual knowledge is good and adaptive thinking from experience is even better.  That's what makes us human.  The thing that makes us most human is our ability to reason morally.  Humans can discover and apply first principles from a variety of philosophical traditions even if those traditions are irreconcilable.  Machines can't do that yet, but giving them the ability to reason adaptively from common sense carries the risk that they will make uncontrolled decisions.  The Laws of Robotics are more useful now than ever as a moral foundation for machine learning.  

AIs can generate the data used in social network analysis but I do not believe they can actually perform the final analysis themselves.  Human analysts must complete the task.  I have blogged before that high-level strategic decisions must be left to humans and cannot be automated away with decision management rules.  Introducing AIs into routine operations is good for business if humans update the heuristics and rule engines that circumscribe their operations.  Humans need training in knowledge management, decision management, Six Sigma, and project management before they launch AIs.  Those AIs need diplomas from machine learning schools with common sense as the core curriculum.  Train the humans first so they know how to govern the machines.  Program the thinking machines with the Laws of Robotics before they begin their machine learning.  

Wednesday, November 13, 2013

Decision CAMP 2013 Gathered Decision Management Rules, Tools, Drools, and Schools (of Thought Leaders)

I had to check out Decision CAMP 2013 down at eBay / PayPal headquarters to see what has changed in the decision science discipline since I studied decision modeling in my MBA program way back in 2001.  I'm all about making good decisions and I usually make better decisions than most people.  Decision science practitioners and theoreticians came to share knowledge on how large enterprises optimize decisions.



Carole-Ann Matignon, the CEO of Sparkling Logic, informed us in her welcome address that what used to be known as "decision rules" and "expert systems" back in the 1990s are now part of the "decision management" (DM) discipline.  I began building a theory throughout this conference that knowledge management (KM) and DM should be linked in an enterprise.  Her talk on the links between Big Data and "big knowledge" clarified that DM should automate decisions that will improve profitability and decrease the costs and time dedicated to routine operations.  I get that DM practitioners should document the soft knowledge in domain experts' heads so it can be codified into business rules.  I still think the KM people who manage taxonomies for storing and retrieving said knowledge need to be involved in transforming that knowledge so users can make sense of the finished rules.  KM people don't always have the technical skills the DM people use to mine Big Data.  They have to work together.

Neil Raden from Hired Brains showed us how to compete on analytics in his keynote.  I was relieved to see that Neil broke down analysis into four types, only two of which require advanced degrees in math.  Types III and IV are the kinds of data assembly and filtering that the rest of us ordinary business types can execute.  The uber-quants in Types I and II will refine business rules based on input from the rest of us provided those rules are focused on the business' key leverage points.  Read all about this typology in Neil's blog article on "Understanding Analytics Types and Needs."  Neil is excellent at breaking down the hard work of analytics into something that business domain experts can use and his presentation on "Big Data Analytics:  The Art of the Data Scientist" is a must-read.  I agree with Neil that there's a huge opportunity for analytics in social benefit analysis and DataKind has project examples the social capital community can use for inspiration.  Neil's request that managers think probabilistically won't go over well with most humans who are wired to act first and rationalize afterwards, but that's what we have to do to remove the layers of abstraction that separate data from reality and create faulty analysis.  Neil showed us the example of Anscombe's quartet to demonstrate that applying the same decision rule to four different business cases just because they have the same regression error will cause real world problems.  Neil also advised us to use A/B testing within an adaptive model (i.e., one that applies continuous improvement) to update models because they degrade over time.  His bottom line is that "type shifting" data work from quant scientists down the chain to relevant domain experts makes it more useful because business domain knowledge matters.  This requires mentoring within the organization so that the analytics typology does not become a lifetime caste assignment.  
I'm pretty sure the KM people will be doing much of that mentoring.  Neil recommended studying Daniel Kahneman's Thinking, Fast and Slow along with the Abilene paradox to grok the human factors in DM.  I'll make a mental note to read his book with James Taylor, Smart Enough Systems.
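Anscombe's point is easy to verify yourself.  Using two of the four published datasets (values from Anscombe 1973), ordinary least squares returns essentially the same line for wildly different data, which is why a rule keyed only to regression output is dangerous:

```python
from statistics import mean

# Anscombe's quartet, datasets I and IV.
x1 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]

def ols(x, y):
    # Simple least-squares fit: slope and intercept.
    mx, my = mean(x), mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return round(slope, 3), round(my - slope * mx, 2)

print(ols(x1, y1), ols(x4, y4))  # (0.5, 3.0) (0.5, 3.0)
```

Identical fitted lines, yet dataset IV is ten identical x-values plus one outlier.  Plot the data before you write the rule.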

Speaking of James Taylor, he was up next on the agenda to discuss the DM journey.  I opted for this talk as opposed to the other track because I'm a domain dude, not a scientist.  I'm not ready for the double black diamond Olympic downhill ski run so I'm staying on the bunny slope.  James breaks down the DM process into discovery, services, and analytics.  Discovery means decomposing the KPIs that require decisions into identifiable points that can be mapped into matrices using scoring sheets.  I suspect that there are not many good decision templates in many verticals and that a market will emerge for flexible decision models much as one did for data models.  The services stage of the DM journey introduces the business rule management system (BRMS) that works like an IT event processing architecture.  The BRMS "rules engine" archives a log of how rules execute, allowing transparency.  A lot of BRMS design starts with data mining that produces decision trees as output and each tree branch becomes a rule.  The journey into analytics IMHO looks like the hardest part because it requires tolerance of probabilities.  That echoes Neil Raden's thoughts above.  I asked my first question of the conference about who owns the DM process, because I had a suspicion that the KM team would end up with ownership if it wasn't clearly defined.  James answered that KM owns the rules portion and IT's analytics team owns the processes.  He added that in the financial services sector, the risk management business group is often tasked to handle the processes.  I get the part about the analytics team assembling data mining tools but I say the Chief Knowledge Officer (CKO) will have to construct and monitor the analytics workbench.  I also think the CKO will have to monitor whoever owns the enterprise's top-level SWOT analysis so that the BRMS isn't just blindly dumping a "big bucket of rules" and producing GIGO.  
I agree with James that DM's value comes from being able to change its processes without changing its inputs, but I'll add that the BRMS processes must support the firm's strategic direction revealed by that SWOT process.  My question scored me a free copy of James' book Decision Management Systems.  Thanks James!

Tuesday's keynote on writing concise rules was mostly over my head.  I learned about Red Hat's Drools BRMS but I noticed that Red Hat is now encouraging migration from Drools to its more robust platform.  I guess Drools is the developer community's DIY solution.  I'll leave the parsing of refraction for rule firing to those even geekier than me.  Drools code makes it easy to write set-oriented rules that are more concise and easier to execute.  The biggest point I got out of this talk is that writing concise rules reduces the time interval between firings.  This implies that more efficient rules can screen more data.  I think that matters for DM programs that screen many Small Data feeds.

The CTO panel's discussion of technical issues made it clear to me that the KM and business domain people who define rules must know some basic coding, at least until providers create BRMS products that are comprehensible to non-statisticians and non-coders.  KM pros will have to learn rules languages like Drools just to make sure domain experts can communicate with IT's analytics teams.  Building rules gives granular insights into how data is collected and stored in data warehouses.  Analytics tasks and traffic are orders of magnitude larger than rules execution, making rules the tail that wags the dog in DM.  Decisions proliferate at the routine operational level and their effects add up in the aggregate.  Optimizing DM won't get them all correct the first time so continuous process improvement is necessary.  I had never seen rule families displayed before but that's how rules engines group rules for execution, so the panel argued for structured tables where less-technical users could build rules in business language rather than code.  You know something, that's almost like pressing the "easy button."  It's too bad that running decision tables to encourage such user empowerment is so hard.  I wonder whether big ERP providers like SAP and IBM have built such simple solutions.  I'll bet easy-to-use DM products will be disruptive in Big Data and their makers in the startup world will be good buyout targets.  Graphic DM products will fill ERP back-end gaps and manage Small Data streams from IoT deployment.  The panel mentioned that automated rules generation from data is a future possibility but it can be dangerous without deep business knowledge.  Poor data quality makes poor rules with GIGO.  I do not share the panel's pessimism that the lack of generic rules templates stems from a lack of generic object databases, or that reluctance to share proprietary data gives away advantages.  
Facebook and Google have published tons of white papers on how they solve technical problems.  I think there's more disruptive opportunity available in building abstract logic templates for industry verticals.  This will probably work best initially for open source systems like Drools or Hadoop to prove the approach is viable.

KPI's talk on using The Decision Model (TDM) in BRMS was a more bunny-slope-speed effort for non-coders like Yours Truly.  Here's my translation.  Domain people build business process models separately from the business logic that IT analysts build.  These process models produce complex flowcharts, and DM consolidates many choice points into simpler flows that manage sequences.  We design DM to identify those choice points that channel a business process into a completely automated channel.  The rule family is a two-dimensional table where several conditions lead to a conclusion.  Business logic sets the conditions within that table.  These rule families link to inferential relationships that lead to logic-driven decisions worth managing.  Each business unit tasks its domain subject matter experts to whiteboard skeletal decision models until the unit's business rules are built.  The models are finished when all data needed to fill them is available.  The IT analytics people will then populate the model with Small Data streams.  TDM is a descriptive way of governing this whole process by assigning responsibility, determining workflows, and seeking managerial approval with change documents.
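A rule family's two-dimensional shape is easy to sketch in code.  Here's a hypothetical loan-decision rule family in Python (the column names, thresholds, and conclusions are all invented for illustration — this is the table concept, not TDM itself): each row pairs condition values with a conclusion, and the first row whose conditions all hold wins.

```python
# Hypothetical rule family: rows of conditions leading to a conclusion.
# All thresholds and labels are invented for illustration.

RULE_FAMILY = [
    {"min_score": 720, "max_debt_ratio": 0.35, "conclusion": "approve"},
    {"min_score": 640, "max_debt_ratio": 0.45, "conclusion": "manual review"},
]
DEFAULT_CONCLUSION = "decline"

def evaluate(score, debt_ratio):
    """Return the conclusion of the first row whose conditions all hold."""
    for row in RULE_FAMILY:
        if score >= row["min_score"] and debt_ratio <= row["max_debt_ratio"]:
            return row["conclusion"]
    return DEFAULT_CONCLUSION
```

The appeal for domain experts is that the table rows read like business language while the loop underneath stays hidden — which is exactly the "structured tables instead of code" empowerment the conference kept pushing.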

OpenRules presented on building smarter decision models.  The philosophy behind rules-based optimization reminds me of the PERT/CPM coursework I completed many years ago as a US Army second lieutenant.  I didn't quite get the stuff about constraint satisfaction problems or the JSR 331 standard, but people more skilled than me use them to solve cost functions in resource allocation problems.  If only I knew some Java I could use these tools to calculate simple data points from automatic feeds that I could display live on my Alfidi Capital site.  OpenRules' Rule Solver has an Excel component that may be more suitable for my needs.
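For flavor, here's the kind of cost-function resource allocation a constraint solver handles, brute-forced in plain Python instead of a JSR 331 solver (the tasks, workers, costs, and capacity constraint are all invented; real solvers search far more cleverly than this exhaustive loop):

```python
# Toy resource-allocation cost minimization, brute-forced with itertools.
# A JSR 331 / constraint solver would handle this declaratively; this
# sketch just shows the cost function and constraint being searched.
from itertools import product

TASKS = ["audit", "report", "filing"]
WORKERS = ["alice", "bob"]
COST = {  # COST[worker][task], invented figures
    "alice": {"audit": 4, "report": 2, "filing": 3},
    "bob":   {"audit": 3, "report": 5, "filing": 2},
}
MAX_TASKS_PER_WORKER = 2  # the capacity constraint

def best_assignment():
    best, best_cost = None, float("inf")
    for assignment in product(WORKERS, repeat=len(TASKS)):
        # discard assignments that violate the capacity constraint
        if any(assignment.count(w) > MAX_TASKS_PER_WORKER for w in WORKERS):
            continue
        cost = sum(COST[w][t] for w, t in zip(assignment, TASKS))
        if cost < best_cost:
            best, best_cost = dict(zip(TASKS, assignment)), cost
    return best, best_cost
```

Swap the invented cost table for real data feeds and you have the "simple data points from automatic feeds" idea, minus the Java.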

Sparkling Logic's adaptive decision management talk revealed that decision goals are sometimes contradictory.  The tradeoff between constraints and speed requires DM to either predict and optimize or learn and adapt.  The version of A/B testing known as "champion/challenger" testing determines whether an existing model should adapt to revised conditions.  Well-designed challengers move the champion towards its optimal value.  IMHO Big Data will generate huge amalgamated streams.  Adding an adaptive DM layer to all algorithms underlying decision trees and other products will enable the algorithms to adapt in real time to changes in the underlying data.  This moves the DM paradigm closer to the automated rules generation that one of the earlier panels felt was unfeasible.  I also think that identifying errors from adaptive learning shows how analysis can get out of tolerance and produce sub-optimal solutions.  Correcting mistakes keeps business processes on track to an optimal state.
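Champion/challenger is simple enough to sketch.  Here's a minimal Python version of the offline comparison step (the models and labeled cases are invented, and real deployments route live traffic shares rather than scoring a fixed batch): score both models against observed outcomes and promote the challenger only if it beats the champion.

```python
# Minimal champion/challenger sketch: promote the challenger only when it
# outperforms the incumbent on observed outcomes. Models and data invented.

def accuracy(model, cases):
    """Fraction of (input, outcome) cases the model predicts correctly."""
    return sum(model(x) == y for x, y in cases) / len(cases)

def champion_challenger(champion, challenger, cases, promote_margin=0.0):
    """Return the winning model after an offline comparison."""
    champ_acc = accuracy(champion, cases)
    chall_acc = accuracy(challenger, cases)
    return challenger if chall_acc > champ_acc + promote_margin else champion

# Invented example: the true cutoff is 5, the champion still assumes 10.
champion = lambda x: int(x > 10)
challenger = lambda x: int(x > 5)
cases = [(x, int(x > 5)) for x in range(20)]
winner = champion_challenger(champion, challenger, cases)
```

The `promote_margin` parameter (my own addition) is where you'd encode "well-designed challengers move the champion toward its optimal value" — demand a real improvement, not a statistical tie.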

I missed the final Tuesday panels due to a prior commitment but I returned Wednesday for the financial services presentation track.  The brave new world in finance for DM started when GSEs issued multiple changes to their automated systems and regulations in recent years.  Using DM methodologies can help alleviate bottlenecks in mortgage underwriting and enforce quality control standards GSEs will accept.  When I heard someone advocate a rules center of excellence in an organization to break silos, I immediately thought of the CKO's role.  KM reps will have to play a role by reviewing rule documentation standards so that rules are speedily adopted into engines.

PayPal had something to say about detecting and stopping fraud.  Different risk sources (credit cards, account hijacking) imply different rule families are needed for each logic engine that screens each Small Data stream.  PayPal promoted its rule writers from domains and gave them technical training, which proves my earlier point that domain experts must have some technical skills like coding to serve on cross-functional teams.  IMHO no matter where you work, you must apply ROI to DM rules.  Comparing money spent on DM to the revenue loss avoidance likely from a DM effort determines whether it's worth making the effort to catch fraud in an uncontrolled domain.
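The ROI comparison I'm describing is just arithmetic.  Here's a back-of-the-envelope version in Python with invented figures (this is my own framing, not anything PayPal presented): the rules are worth running only if expected loss avoidance exceeds what the DM program costs.

```python
# Back-of-the-envelope ROI test for a fraud-screening DM effort.
# All figures are invented for illustration.

def dm_roi(annual_fraud_loss, catch_rate, false_positive_cost, dm_cost):
    """Return (net benefit, ROI) of a DM fraud-screening program."""
    loss_avoided = annual_fraud_loss * catch_rate
    net_benefit = loss_avoided - false_positive_cost - dm_cost
    return net_benefit, net_benefit / dm_cost

# Hypothetical: $2M annual fraud exposure, rules catch 60%, $100K of
# false-positive friction, $400K program cost.
net, roi = dm_roi(2_000_000, 0.60, 100_000, 400_000)
```

If `roi` comes back negative, the uncontrolled domain isn't worth screening and the DM budget belongs elsewhere.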

The talk about insurance claim fraud once again proved that companies traditionally deploy a BRMS solution separately from an analytics solution.  The new trend is that DM increasingly consolidates rules and analytics into linked suites of solutions.  I infer that this is a prerequisite for integrated DM suites to plug into the next generation of ERP systems.  IMHO there must be disruptive gaps where DM solutions link to ERP solutions.  If Salesforce and other SaaS providers don't offer DM solutions, entrepreneurs can create tech that closes these gaps and solves pain points.

The scheduled FannieMae presentation on rule management was cancelled, which is fitting because FannieMae and all other GSEs deserve to be cancelled out of existence as punishment for their roles in blowing up the housing sector.  I wandered into the health care track's presentation on complex event processing (CEP).  I discovered that Drools Guvnor is considered to be a KM system.  Local groups' rule writers can upload their own knowledge definitions.  This kind of knowledge engineering transfer of human understanding into AI systems is the gold standard for the KM-DM interface.  IT pros need tools like Apache Maven and Git to track project workflows as the AIs are built.  I asked whether Drools Guvnor could interface with SharePoint, the only KM tool family I've used.  The answer I got was that there's no direct connection but users can build web-based interfaces.  I sure would like to see them used in tandem with a linkage.

I returned to the finance track to hear NASDAQ OMX talk about how they use BRMS to obtain a competitive edge among stock exchanges.  Their customer segments have different trading needs, and the exchange used multiple steps to determine fee changes before implementing BRMS.  It's good that they compared the ROIs of the old and new ways of doing things, but I think a true apples-to-apples comparison would compute the old ROI from the old data and old rules together.  Historical data does matter, but the new rules generate their new ROI from new data, not old data.  New data/rules versus old data/rules is how I would have framed the comparison, but I don't run an exchange.  The NASDAQ guys think rules can identify their most profitable clients and potential growth segments because new pricing strategies drive frequent rules changes.  This talk made me reflect on my fixation with SharePoint and why I still think it's relevant.  Domain rule builders can post their updated rules tables to SharePoint.  That's how DevOps people can easily pull them and re-map data flows into the rules.  See folks, it really does work when you plug KM tools into the DM development process.

The finance track ended with a panel of all of the track's speakers.  Compliance really matters because the exchanges must maintain archives of their rules and metadata of the rules' authorship histories.  Moving data storage to the cloud saves money by allowing the IT department to decommission old data warehouses.  They shared good stories but I came away thinking about how a financial services firm would use DM to innovate.  I gather that DM enables reactions to competitors' moves by driving rules changes.  My concern is how external scanning translates to a strategy pivot and business unit directives.  Who in the enterprise is charged with using SWOT as a scanning matrix?  Who updates the balanced scorecard?  These are integral to ensuring that DM reacts appropriately to environmental changes.  Where should the DM center of excellence (COE) reside?  Under the COO or CKO?  It's mainly an optimizer for internal operations but it needs KM input and governance.  In financial services, the DM COE is probably in the Risk/Compliance business group.  Is the CKO important enough to be on equal footing in the C-suite with the COO and CIO, or should the CKO be subordinate to the COO?  What are the professional associations for DM practitioners?  Decision CAMP may be the first purely DM body.  DM combines operations research, KM, quality assurance, and IT functions, so it may warrant its own professional home somewhere.  Check out the BPM Institute, the Decision Analysis Society of INFORMS, the Decision Sciences Institute, the Business Rules Group, and FICO's Decision Management Community for the foundation knowledge of this discipline.

The final speaker discussed technical work in Grailog visualization that was way beyond my understanding.  The bottom line is that Grailog combines semantic expressions and social expressions.  In Web 2.0 usage it connects people to data and creates contexts for both of those sets.  I couldn't begin to explain graph inscribed logic if I tried so I let my mind wander back to business topics during the presentation.  Someone should write the DM version of Cloudonomics so the KM and IT communities can see basic ROI calculations for setting up rule families and templates.  I let my mind wander some more and started thinking about all the hot chicks who will be attracted to DM careers once they find out that Yours Truly, the epitome of manliness, can speak articulately about the topic.  After all, gorgeous women just can't help themselves around me.

This was a pretty mind-bending conference.  I got exposure to subjects that have really evolved since I first calculated posterior probabilities for a decision tree and worked out utility functions over a decade ago.  I may not need to learn to code after all if these DM tools keep evolving to the point where tables and charts allow domain inputs in plain language.  Decision CAMP kept me well-informed for three whole days and well-fed most of that time with free food.  I'm definitely adding the DM sector to the bodies of knowledge I actively track.  

Saturday, November 02, 2013

The Haiku of Finance for 11/02/13

There goes all knowledge
Lost in some dusty archive
Retrieve it for cash

Open Source Knowledge Management Should Leverage GIS

Embedded data is the future of all research, analysis, and publishing.  There's no going back.  Photos and maps that are loaded with narrative timelines make enormous sense and confer strategic advantages to enterprises.  The best geographic information systems (GIS) applications turn multiple data streams into visual layers that make comparison easier.  The best enterprises leverage their KM people and systems to share GIS data with everyone.

The US government's openly available GIS tools are impressive.  Uncle Sam's GeoPlatform and National Atlas apply the US government's open data rules.  They also apply the Open Geospatial Consortium (OGC) standards for publishing map-embedded data.  The Federal Geographic Data Committee (FGDC) makes policy via its National Geospatial Advisory Committee (NGAC).  It publishes the GeoPlatform and plans the development of the National Spatial Data Infrastructure, which presumably will be compatible with a Global Spatial Data Infrastructure.

The US government has a decent roadmap with its Presidential Records Management Directive (PRMD) and NARA keeps the government's internal KM pros updated with its Records Express blog.  Private enterprises often outsource their archival functions because that's not a core function, but defining the terms for retrieval is within a Chief Knowledge Officer's authority.  The point is that KM should dig through digitized archives for those records that show changes in locations over time.

KM turns archive management into content management when CKOs adopt OASIS's Content Management Interoperability Services (CMIS) and Dublin Core Metadata Initiative (DCMI) protocols.  CKOs must know these standards and ensure that their contracted records management vendors apply them.  The CKO has the lead in defining the KM taxonomy for implementing these protocols within the enterprise but must work with the CIO to ensure embedded data products - including GIS displays - are sharable and retrievable.

Geospatial data easily integrates into knowledge management platforms with tools like Esri Maps for SharePoint.  The availability of said knowledge for common users only becomes actionable when KM pros install these tools within ERP systems and evangelize them to enterprises.  KM and IT pros can make magic happen and the how-to steps are available for free.  AIIM has tons of free guidelines for optimizing content and managing archives.  This stuff may sound arcane but I've discovered many times that very relevant information is buried in obscure records and is still actionable long after its creation.

I discover these tools because I seek a competitive edge over other financial analysts and business leaders who don't do their homework.  I hear thought leaders at tech conferences push the integration of mobile, cloud, and Big Data.  I believe that embedded data in sharable GIS content is the killer app that will make that big tech dream come true.  You heard it here first at Alfidi Capital.  

Friday, October 11, 2013

Alfidi Capital Drops Into DataWeek 2013

Life for Yours Truly isn't all about slogging through investment conferences.  I throw in the occasional trade show in the hope I'll learn something new and meet attractive women.  DataWeek 2013 and API World fit that bill for me last week.



I didn't have time for the entire conference because it's really a code developer's show and I'm Mr. Finance.  I could only attend a select few sessions because I was too cheap to pay for anything but an Expo pass.  I noticed that many of the paid seminars had no gatekeeper at the door, so anyone could have walked into a paid session.  I hope the conference organizers weren't losing money from freeloader coders wandering around Fort Mason Center.

My first free session was an overview of how SQL queries and Java apps running in a Hadoop ecosystem feed Big Data streams into ERP systems.  I don't know code but I know a business problem when I see it.  Hadoop apps interact with both transactions and analytics, completing the CRM / Big Data / ERP integration that so many IT pros say they want.  This leads me to the belief that knowledge management pros must understand Hadoop.  There is no other way for them to ensure that the enterprise's business intelligence effort and predictive analytics are available to the C-suite.  That's how tools channel data up the decision chain to support strategy.
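Hadoop itself is out of scope for this blog, but the map/shuffle/reduce pattern those jobs use to roll raw transaction streams up into ERP-ready aggregates fits in a few lines of plain Python.  This is my own sketch with invented field names, not anything from the session:

```python
# Pure-Python sketch of the map/shuffle/reduce pattern a Hadoop job uses
# to aggregate a raw transaction stream. Field names are invented.
from collections import defaultdict

def map_phase(records):
    """Map step: emit one (key, value) pair per transaction."""
    for r in records:
        yield r["region"], r["amount"]

def reduce_phase(pairs):
    """Shuffle by key, then reduce each group by summing."""
    groups = defaultdict(float)
    for key, value in pairs:
        groups[key] += value
    return dict(groups)

transactions = [
    {"region": "west", "amount": 120.0},
    {"region": "east", "amount": 80.0},
    {"region": "west", "amount": 30.0},
]
totals = reduce_phase(map_phase(transactions))
```

The `totals` dict is the kind of rolled-up figure an ERP system or C-suite dashboard actually consumes; Hadoop's contribution is running the same two phases across a cluster instead of one process.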

Hadoop reminds me of Salesforce's approach to offering cloud ERP solutions, except Hadoop solutions are built on demand.  Both types of solutions can be scaled down for small enterprises that can't afford big iron computing.  My first take-away from DataWeek is that startups doing manual, customized apps or analytic solutions for B2B customers will fail.  Data ecosystems like Hadoop are growing too large for anything but machine learning that automates analytics.  Only top-layer KM, heuristics for predictive analytics, and corporate strategy should be subject to manual manipulation.  The KM focus should be on structural design, with decision criteria that trigger manual intervention.

I patrolled the Expo floor prior to the heavily advertised beer tasting.  I was pleasantly surprised to see several attractive women staffing the booths.  Disqus gal, you were the best, and thank you for filling out those jeans so well.  The women attending were also cute.  It's good to see more women pursuing tech careers.  They should pursue me too but I understand if they're focused.

I attended a talk from IBM Analytics about the most important factors in data visualization.  I started visualizing what the hot blonde woman in the back row would look like at my place but then I remembered I have a blogging job to do.   It turns out there are four pillars governing how analysts should publish visual representations of data:  purpose (why), content (what), structure (how), formatting (everything else).  Knowing the purpose for assembling a presentation lays the foundation for an iterative design checklist.

Purpose for me means knowing what actions I want to prompt in my audience, such as uncontrollable rage at my opinions.  Content covers the data and relationships that matter to me and my audience, like IQ comparisons that demonstrate my intellectual superiority to most of humanity.  The IBM guy showed a simple bar chart of how Apple's iPhone revenue was greater than all of Microsoft's revenue, so even two data points can show something powerful.  Structure governs the meaning of axes and layouts that reveal relationships, like my relationships with numerous female admirers.  That would take too much time to graph so I'll just relate IBM's best practices here.  Using 3D graphs distorts data and you can't see intersections with axes when the front 3D thingy obscures what's in back.  The human brain is better at comparing angles than circles, so charts with long bars on a common baseline are easier to perceive.  Horizontal bars showing binary (either/or) breaks in continuous data frees the vertical axis to sort categories by ranking.  Finally, formatting data to show what's important accounts for how users will consume the data.  Callouts in geodata reveal something that matters.  Using color to indicate quantity forces readers to keep glancing at the legend, which is bad.  Color also has cultural context; red-/white/blue means something in the US and France.  "Redundant encoding" puts the same data into different visual channels, driving the point into viewers' brains in case the format doesn't translate perfectly.  Labeling highlights things, so either make it possible to label outliers (if they reveal something significant) or use them to orient viewers when you label axes.  IBM's white paper on successful visualization sums up all of this stuff and so does the author's handy diagram.  This knowledge makes me think about the graphs I've seen in oil and gas investment presentations.  
Formatting the wells' decline rates means everything if scale de-emphasizes the steepness of the decline curve.  Readers fooled by logarithmic scales may think declines are gradual in shale wells.  I'll look for scale the next time some shale energy promoter shows me his projections.
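Two of those IBM tips render nicely even in ASCII.  Here's a quick Python sketch (my own, with the iPhone-versus-Microsoft comparison reduced to invented index numbers): bars on a common baseline sorted by rank, with "redundant encoding" in the form of bar length plus a printed value, so the number survives even where the length is hard to read.

```python
# ASCII horizontal bar chart: common baseline, sorted by rank,
# redundant encoding (bar length + printed value). Data is invented.

def hbar_chart(data, width=20):
    """Render a {label: value} dict as horizontal bars sorted descending."""
    top = max(data.values())
    lines = []
    for label, value in sorted(data.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(width * value / top)
        lines.append(f"{label:<10} {bar} {value}")
    return "\n".join(lines)

chart = hbar_chart({"iPhone": 91, "Microsoft": 78})
```

Descending sort puts the ranking on the vertical axis, and appending the raw value means no legend-glancing — exactly the formatting pillar in action.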

The only other seminar I had time to check out was on how security architectures can be unified across mobile, cloud, the Web, and APIs.  Typical APIs aren't exposed to the general public and API security is enterprise-specific.  The security landscape is so broad that it opens up APIs as a potential portal for intrusion.  Enterprises that support tiered access from public users and internal users (like financial service institutions) must support multiple credentials.  I think authentication protocols will have to be built into APIs prior to launch, just like analytics.  APIs must do much more than run apps.  They collect use case data, feed analytics, provide a security barrier, and define the user experience.  Integrating all of these things will require more effort from programmers than ever.  I'm glad I'm not a programmer.
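The tiered-credential idea boils down to one gateway check.  Here's a minimal Python sketch (the keys, tiers, and endpoints are all invented, and a real gateway would use signed tokens rather than a lookup table): map each API key to a tier and each endpoint to a minimum tier, and refuse the call before any app logic runs.

```python
# Sketch of tiered API access: each credential carries a tier, each
# endpoint demands a minimum tier. All keys/tiers/endpoints invented.

API_KEYS = {"pub-123": "public", "int-456": "internal"}
ENDPOINT_TIERS = {"/quotes": "public", "/positions": "internal"}
TIER_RANK = {"public": 0, "internal": 1}

def authorize(api_key, endpoint):
    """Allow the call only if the key's tier meets the endpoint's minimum."""
    tier = API_KEYS.get(api_key)
    required = ENDPOINT_TIERS.get(endpoint)
    if tier is None or required is None:
        return False  # unknown key or endpoint: deny by default
    return TIER_RANK[tier] >= TIER_RANK[required]
```

A public brokerage customer can pull quotes but not internal positions, while an internal credential clears both tiers — the financial-services case the speaker described.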



I took the above photo at the Rackspace / Codame / Geekdom SF Dataweek afterparty.  The photo captures a computer-generated video with parts of the image melting and morphing.  I'm the dark suit and white shirt in the middle foreground holding my camera.  One aging hippie near Fort Mason maligned me as a "suit" while I walked to the conference center that day and I took it as a compliment.  I represent money, knowledge, and style, thank you very much, and that's why hot women gravitate to me and not aging hippies.  The Rackspace party reminded me of the articles I read in the late '90s about dot-com startups living wild on VC money in San Francisco's SOMA district.  This afterparty was my chance to relive what I missed by not being around for that scene.  It looked more like a rave, with a contingent of "club kids" in high-soled shoes and glittery leggings.  I noted several of the hot Expo booth babes letting their hair down but I was too busy admiring the tech on display to get their hotel room numbers.  Twentysomethings owned the future, in the '90s and today.  I'm not in my twenties anymore but I own the future now.