Mario Lorenzo's Technical PoV

This blog contains my technical point of view on a wide range of topics spanning technology, leadership, and business. Posts will jump from general overviews to deep dives. Your thoughtful comments are always welcome.

Classifying Relations using Recurrent Neural Network with Ontological-Concept Embedding

Relation extraction and classification is a fundamental and challenging aspect of Natural Language Processing (NLP) research that depends on other tasks such as entity detection and word sense disambiguation. Traditional relation extraction methods based on pattern matching with regular-expression grammars and lexico-syntactic pattern rules suffer from several drawbacks, including the labor involved in handcrafting and maintaining a large number of rules that are difficult to reuse. Recent research has focused on improving the accuracy of relation extraction using neural networks, particularly Recurrent Neural Networks (RNNs). A promising approach for relation classification uses an RNN that incorporates an ontology-based concept embedding layer in addition to word embeddings. This dissertation presents several improvements to this approach by addressing its main limitations. First, several different types of semantic relationships between concepts are incorporated into the model; prior work considered only is-a hierarchical relationships. Second, a significantly larger vocabulary of concepts is used. Third, an improved method for concept matching is devised. Adding these improvements to two state-of-the-art baseline models demonstrated improved accuracy when evaluated on the benchmark data used in prior studies.
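
To make the concept-sequence idea concrete, here is a rough sketch of resolving a concept and collecting related concepts across several relationship types rather than only is-a links, producing the kind of sequence a concept embedding layer would consume. The ontology content and helper names are hypothetical and are not taken from the dissertation.

    # Hypothetical illustration: build the concept sequence fed to an
    # ontology-embedding layer by following several relation types,
    # not only is-a links. The ontology content here is made up.
    ONTOLOGY = {
        # concept: list of (relation, related concept)
        "aspirin":  [("is-a", "nsaid"), ("treats", "headache")],
        "nsaid":    [("is-a", "anti-inflammatory_drug")],
        "anti-inflammatory_drug": [("is-a", "drug")],
        "headache": [("is-a", "symptom")],
        "drug":     [],
        "symptom":  [],
    }

    def concept_sequence(concept, max_depth=5):
        """Breadth-first walk over multiple relation types, yielding
        (relation, concept) pairs that form the embedding input sequence."""
        seen = {concept}
        frontier = [concept]
        sequence = [("self", concept)]
        for _ in range(max_depth):
            next_frontier = []
            for c in frontier:
                for relation, related in ONTOLOGY.get(c, []):
                    if related not in seen:
                        seen.add(related)
                        sequence.append((relation, related))
                        next_frontier.append(related)
            frontier = next_frontier
        return sequence

    print(concept_sequence("aspirin"))
    # [('self', 'aspirin'), ('is-a', 'nsaid'), ('treats', 'headache'), ...]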


Validation and Verification of Clinical Decision Support Systems

Systems that make decisions require validation and verification. Making sure the system does the right thing (it is designed to do what the user expects, i.e. validation) and does the right thing right (it behaves as it was designed to, i.e. verification) is integral, especially for mission-critical systems that may affect human lives. Adopting a quality management system that ensures verification and validation are conducted at all phases of the development lifecycle, and that deliverables at each stage are reviewed and approved, will help catch misunderstandings, unexpected behavior, errors, and even malicious intent.

These steps require formal requirements to be documented from the user's point of view, helping to ensure the user understands how the system is expected to behave. Formal review and approval of requirement changes by all parties, including customer and sponsor users, is critical. Including end users in an iterative development cycle is also a key way of collecting feedback and validating the behavior of the system. Design documents should likewise be produced, reviewed, verified, and approved to ensure the design accounts for not just the functional requirements of the system but also non-functional requirements such as security, reliability, maintainability, scalability, performance, flexibility, and usability, among other quality factors.

The code that realizes the design should be implemented, code reviewed, and approved by QA. This also requires assessing the correctness of the system to ensure that, under different test scenarios, it produces the expected results. Many of these tests can be automated into regression test suites that codify system expectations and help catch deviations when they are first introduced. Beyond the release and deployment of a system into production, monitoring for security, performance, and quality issues is important. These issues should be captured and tracked to resolution. Critical issues, such as leaking confidential information or a system outage, require additional scrutiny of the design.

AI methods like machine learning and statistical algorithms complicate a software engineer's ability to produce deterministic behavior in their system. An example of this could be a system that decides whether to approve or deny an insurance claim. Because of the nature of many ML algorithms, no supporting evidence is produced for the decision. This typically requires a parallel system that attempts to collect supporting evidence, which is not necessarily what actually drove the ML model's decision. In healthcare this is a very controversial issue, especially with government regulators that require audit trails and decision rationale to be preserved. One step that can be taken is to allow the system to make certain types of decisions that have low impact while requiring a human reviewer for those that have high impact. For example, approving a medical procedure can be decided by the system alone, but denying the procedure requires human approval.
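
A minimal sketch of that decision-routing policy follows; the field names, outcome labels, and confidence threshold are illustrative assumptions, not taken from any real system. Low-impact approvals pass straight through, while denials or low-confidence outcomes are queued for human review.

    # Illustrative sketch only: route ML decisions so that high-impact
    # outcomes (e.g. denials) always require a human reviewer.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        claim_id: str
        outcome: str        # "approve" or "deny"
        confidence: float   # model confidence in [0, 1]

    HIGH_IMPACT_OUTCOMES = {"deny"}
    MIN_AUTO_CONFIDENCE = 0.9   # hypothetical threshold

    def route(decision: Decision) -> str:
        """Return 'auto' if the system may act alone, else 'human_review'."""
        if decision.outcome in HIGH_IMPACT_OUTCOMES:
            return "human_review"
        if decision.confidence < MIN_AUTO_CONFIDENCE:
            return "human_review"
        return "auto"

    print(route(Decision("C-1", "approve", 0.95)))   # auto
    print(route(Decision("C-2", "deny", 0.99)))      # human_review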

As a parting thought, one could conceive of constructing system constraints apart from the application code, encapsulated in the form of aspects and used to enforce policy in the system. Such a system could be configured to prevent certain decisions from being made through a centralized policy management system. This would provide a mechanism for incrementally adding constraints to the system without affecting the rest of the application.


Search Poisoning Attacks and Blackhat SEO

Search poisoning is a deceptive technique used in blackhat SEO, where malicious sites exploit search engine ranking by presenting web pages that appear relevant to popular keywords, with the goal of having their web page ranked high within search results. An unsuspecting search user who clicks on the result is directed to a landing page and then quickly redirected through a series of other web pages before arriving at a terminal page. There, the user may be presented with a scam soliciting personal information or an option to download fake anti-virus software or other malware. To accomplish this, an adversary may hijack a compromised web site, leveraging the legitimate site's reputation to boost the poisoned page's rank within search results. Another method is to conditionally serve web pages based on whether the requestor is a web crawler or someone referred from a search result. This allows the adversary to present a web crawler with a page that appears relevant to the targeted keywords but serve a different, malicious page to a visitor who clicked on a search result and arrived at the landing page. The technique then redirects the user to other web pages, depending on whether there are affiliates involved in the search poisoning campaign. Eventually the user arrives at the malicious terminal page, where they are presented with malware or other scams. continue reading
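
A simple defender-side check for this kind of cloaking is to request the same URL twice, once presenting a crawler User-Agent and once presenting a search-engine Referer, and compare what comes back. The sketch below uses the requests library; the header values, similarity threshold, and redirect heuristic are illustrative assumptions, not a description of any particular detection system.

    # Illustrative cloaking check: fetch a URL as a crawler and as a
    # search-referred visitor, then compare the two responses.
    import difflib
    import requests

    CRAWLER_HEADERS = {"User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"}
    VISITOR_HEADERS = {
        "User-Agent": "Mozilla/5.0",
        "Referer": "https://www.google.com/search?q=popular+keywords",
    }

    def looks_cloaked(url: str, threshold: float = 0.5) -> bool:
        """Return True if the crawler view and the visitor view differ sharply."""
        as_crawler = requests.get(url, headers=CRAWLER_HEADERS, timeout=10, allow_redirects=True)
        as_visitor = requests.get(url, headers=VISITOR_HEADERS, timeout=10, allow_redirects=True)
        similarity = difflib.SequenceMatcher(None, as_crawler.text, as_visitor.text).ratio()
        # Also flag when the visitor path bounces through extra redirects.
        redirect_gap = len(as_visitor.history) - len(as_crawler.history)
        return similarity < threshold or redirect_gap > 2

    # Example (hypothetical URL):
    # print(looks_cloaked("http://example.com/landing-page"))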


Privacy-Preserving Methods using Encrypted Data for Neural Networks

The use of real-world data to train machine learning models is complicated by the presence of sensitive data, such as personal identifiers and health and financial information. This sensitive data is typically regulated, and its unauthorized disclosure or leakage may result in severe penalties or criminal prosecution. As such, data owners require that the sensitive data be fully de-identified (i.e. redacted) before a third-party vendor begins training or classification activities. This de-identification process is very costly due to the time required to properly redact all sensitive information. The process is also prone to human error, which can result in unintentional disclosure of sensitive information. This is the central issue addressed by the study of privacy-preserving methods.

The goal of this survey is to thoroughly review and assess the current research and state of the art for a specific subfield of privacy-preserving methods focused on the training and classification phases of machine learning algorithms. These methods use homomorphic encryption to provide confidentiality of sensitive data during the training and classification phases of ML models.
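
The survey's subject is easier to picture with a toy example of homomorphic encryption. The sketch below is a textbook Paillier cryptosystem with deliberately tiny, insecure parameters, showing the additive property that lets a party combine encrypted values without decrypting them. Production schemes used with neural networks (e.g. CKKS-style HE) are far more elaborate; nothing here is taken from the survey itself.

    # Toy Paillier cryptosystem (tiny, insecure parameters) showing that
    # E(a) * E(b) mod n^2 decrypts to a + b.  Requires Python 3.9+.
    import math
    import random

    p, q = 17, 19                 # deliberately tiny primes; never use in practice
    n = p * q
    n2 = n * n
    g = n + 1
    lam = math.lcm(p - 1, q - 1)

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)    # modular inverse

    def encrypt(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    a, b = 42, 58
    c_sum = (encrypt(a) * encrypt(b)) % n2   # addition happens under encryption
    assert decrypt(c_sum) == (a + b) % n
    print(decrypt(c_sum))                    # 100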

continue reading


IoMT Implanted Medical Device Security

Implanted Medical Devices (IMDs) provide an effective means for monitoring the health of patients and enabling the practice of precision medicine, which can yield more informed diagnoses and optimized treatment for patients. These devices can participate in a network fabric, communicating with other implanted devices or with external controllers outside the body. These external controllers can collect and transmit data between the implanted devices and remote servers that process the data. Collectively, these devices are known as Internet of Medical Things (IoMT) devices. IoMT devices, like other IoT devices, are vulnerable to security attacks. One such vulnerability involves malicious devices eavesdropping on or hijacking implanted devices with the intent to steal information or harm the patient. The ability to authenticate devices within the IoMT fabric is an important aspect of security that helps mitigate many of these threats. continue reading
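
One widely used building block for this kind of device authentication is a challenge-response exchange over a pre-shared key. The sketch below is a generic HMAC-based illustration (the key provisioning and message layout are assumptions), not a description of any specific IoMT protocol.

    # Generic challenge-response sketch with a pre-shared key (illustrative only).
    import hmac
    import hashlib
    import os

    PRE_SHARED_KEY = os.urandom(32)   # provisioned into both controller and implant

    def implant_respond(challenge: bytes, key: bytes) -> bytes:
        """Implant proves knowledge of the key without revealing it."""
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def controller_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)   # constant-time compare

    challenge = os.urandom(16)                     # fresh nonce defeats replay
    response = implant_respond(challenge, PRE_SHARED_KEY)
    print(controller_verify(challenge, response, PRE_SHARED_KEY))   # True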


Threat of Covert Channels

The ability to secure an organization's IT systems and network resources is integral to safeguarding the organization's sensitive information. Securing an organization requires a comprehensive approach that covers all layers, including physical location, hardware, software, networking, and personnel. The classic RAND report identifies the areas of vulnerability throughout these layers of a computing environment as leakage points and discusses the various threat vectors that system designers should consider when designing resource-sharing systems. One such leakage point involves the use of covert channels to transmit information through unconventional methods over optical, thermal, or acoustic media. This technique uses devices such as LEDs, speakers, displays, or CPUs to modulate a signal that is received by another device that may be disconnected (i.e. isolated) from the network. Secure computing networks, such as the Joint Worldwide Intelligence Communications System (JWICS), used by military and government agencies to share sensitive information, are typically isolated from other networks such as the internet. Some very secure systems are physically isolated from all other systems, referred to as air-gapped, because no network connection links them to another computing device. For intruders to exfiltrate information from these air-gapped systems, they must use covert channels such as hard-drive LEDs, LED displays with steganography, or smartphone display brightness and volume... continue reading
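
To make the signalling principle concrete, the sketch below shows it in the abstract: bits are modulated as on/off intervals of some observable emitter (an LED, screen brightness, fan noise) and demodulated by sampling it. No hardware is touched; the "emitter" is just a list of samples, and the symbol length is an arbitrary assumption.

    # Abstract on-off keying sketch: bits become runs of "emitter on/off"
    # samples and are recovered by thresholding each window.  Purely
    # illustrative; no hardware I/O is performed.
    SAMPLES_PER_BIT = 4   # assumed symbol length

    def modulate(bits):
        """Each bit becomes a run of 'on' (1.0) or 'off' (0.0) samples."""
        signal = []
        for bit in bits:
            signal.extend([1.0 if bit else 0.0] * SAMPLES_PER_BIT)
        return signal

    def demodulate(signal):
        """Average each symbol-length window and threshold it back to a bit."""
        bits = []
        for i in range(0, len(signal), SAMPLES_PER_BIT):
            window = signal[i:i + SAMPLES_PER_BIT]
            bits.append(1 if sum(window) / len(window) > 0.5 else 0)
        return bits

    message = [1, 0, 1, 1, 0, 0, 1]
    assert demodulate(modulate(message)) == message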


Aspect-Oriented Framework for Ontology Modularization

Ontologies traditionally have served the purpose of representing knowledge by defining the relevant concepts and relationships for a particular domain. Today, ontologies also serve as the foundation of knowledge for AI and cognitive systems. These ontologies can quickly grow in size and complexity, requiring additional system resources and reducing reusability. The technique of ontology modularization attempts to segment the ontology into smaller modules that improve its reusability... continue reading


Ultra-IoMT: An Ultrasonic Intra-Body Area Network for Implanted Medical Devices

Modern Internet of Medical Things (IoMT) implanted devices require reliable and safe communication in order to coordinate between sensors and actuators within the human body. Prior research has demonstrated that ultrasonic waves are a safer alternative to radio frequency (RF) when operating near human tissue for extended periods, a concern recognized by government agencies and standards institutions such as the FCC, FDA, and IEEE. Recent research has demonstrated the feasibility of ultrasonic communications within the human body as a safer alternative... continue reading


Relation extraction using Deep Multi-channeled RNN/LSTM with an Ontology embedding layer

Relation extraction and classification is a common Information Extraction task involving the linking of entities that participate in a semantic or syntactic relationship within a span of text. The process of relationship extraction is typically dependent on other NLP tasks such as entity detection, part-of-speech (POS) tagging, or syntactic parsing. There has been significant work resulting in numerous methods that address this problem. These methods can be categorized into knowledge-based, supervised, and semi-supervised (Etzioni, 2008).

The critiqued paper by Lamurias et al. (2019) proposes an extension to previous work by B. Xu et al. (2018) and Y. Zhang et al. (2018) by introducing an ontology embedding layer on top of a baseline model previously proposed by Y. Zhang et al. (2018). This layer produces dense vector representations from a sequence of ontological ancestors for the concepts resolved from entities within each input span. The approach leverages an RNN/LSTM featuring a multi-channel stacked neural network architecture that combines multiple embedding sources, including the proposed ontological concept embedding method (further elaborated in the Methodology section of this critique). The hypothesis for this work is supported by the success of current state-of-the-art relation extraction methods that leverage LSTMs along with word embedding layers (Mikolov, 2013; Pennington, 2014). Lamurias et al. postulate that an ontology (or concept) embedding layer can provide additional domain-specific information that is otherwise hidden in the training data. This insight stems from both the well-known success and the limitations of word embeddings ... continue reading
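
The multi-channel idea can be sketched in a few lines of PyTorch: one embedding channel for word identifiers and a second for resolved ontology-concept identifiers, concatenated and fed to an LSTM whose final state drives the relation classifier. The vocabulary sizes, dimensions, and layer choices below are placeholders, not the authors' configuration.

    # Minimal sketch (not the paper's code): word + concept embedding channels
    # concatenated and fed to an LSTM for relation classification.
    import torch
    import torch.nn as nn

    class MultiChannelRelationClassifier(nn.Module):
        def __init__(self, n_words=10000, n_concepts=2000,
                     word_dim=100, concept_dim=50, hidden=128, n_relations=5):
            super().__init__()
            self.word_emb = nn.Embedding(n_words, word_dim)
            self.concept_emb = nn.Embedding(n_concepts, concept_dim)
            self.lstm = nn.LSTM(word_dim + concept_dim, hidden, batch_first=True)
            self.classifier = nn.Linear(hidden, n_relations)

        def forward(self, word_ids, concept_ids):
            # Both inputs: (batch, seq_len), aligned token by token.
            x = torch.cat([self.word_emb(word_ids), self.concept_emb(concept_ids)], dim=-1)
            _, (h_n, _) = self.lstm(x)          # final hidden state summarizes the span
            return self.classifier(h_n[-1])     # (batch, n_relations) logits

    model = MultiChannelRelationClassifier()
    words = torch.randint(0, 10000, (2, 12))     # toy batch of two 12-token spans
    concepts = torch.randint(0, 2000, (2, 12))
    print(model(words, concepts).shape)          # torch.Size([2, 5])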


Improving Reliability for Ultrasonic Implantable IoMT Networks through a Robust Transport Layer

Modern Internet of Medical Things (IoMT) implanted devices require a reliable communication channel in order to coordinate between sensors and actuators within the human body. Existing protocols for IoMT devices provide physical and MAC layer definitions for point-to-point communication between nodes in an ad hoc network. Due to limitations of the physical layer, which uses ultrasonic frequency bands, nodes must be within 10 cm of each other for signals to propagate through human tissue (Santagati et al, 2017). Communicating over longer distances therefore requires a distributed routing scheme in which participating nodes forward packets across the ad hoc network (Demirors et al, 2016). The absence of a robust transport layer within IoMT networks and protocols results in unreliable end-to-end transmission across a multi-hop route within the network. The goal of this research is to explore the applicability of a transport layer that provides error control and retransmission within an ultrasonic IoMT network. The aim is to examine existing protocols, such as the UsWB protocol proposed by Santagati et al or IEEE 802.15.6, and improve the overall reliability of communication by providing guaranteed packet delivery across end-to-end routes within a Body Area Network (BAN). A successful outcome of this research should demonstrate that adding a transport layer with error control and retransmission improves QoS metrics compared to existing approaches.
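
The simplest form of the error control and retransmission being proposed is a stop-and-wait ARQ: the sender holds each packet until an acknowledgment arrives and retransmits after a loss. The simulation below is a generic illustration over a lossy in-memory channel, not the UsWB or IEEE 802.15.6 mechanics, and its loss rate and retry limit are arbitrary.

    # Generic stop-and-wait ARQ over a lossy simulated channel (illustrative only).
    import random

    LOSS_PROBABILITY = 0.3   # assumed per-transmission loss rate
    MAX_RETRIES = 10

    def lossy_send(packet):
        """Simulated hop: returns an ACK unless the channel drops the packet."""
        if random.random() < LOSS_PROBABILITY:
            return None                        # packet (or its ACK) was lost
        return ("ACK", packet["seq"])

    def reliable_send(payloads):
        """Send each payload until its ACK is observed, or give up."""
        delivered, retransmissions = [], 0
        for seq, data in enumerate(payloads):
            packet = {"seq": seq, "data": data}
            for attempt in range(MAX_RETRIES):
                ack = lossy_send(packet)
                if ack == ("ACK", seq):
                    delivered.append(seq)
                    retransmissions += attempt
                    break
            else:
                raise RuntimeError(f"packet {seq} undeliverable after {MAX_RETRIES} tries")
        return delivered, retransmissions

    delivered, retries = reliable_send(["hr=62", "spo2=98", "temp=36.9"])
    print(delivered, "retransmissions:", retries)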


Review of Relational Methods for Information Extraction

This paper will review a novel approach within the field of information extraction based on the relational database model. Current methods and frameworks for information extraction make use of grammar-based languages, such as regular expressions, in the form of rule patterns applied sequentially to unstructured text, such as a document, in order to produce spans of text that represent some meaning within a domain. These grammars are typically applied in a cascading fashion, where rules are applied in priority order over the results of previous rules. This approach is known to become very difficult to maintain and scale as the rules for information extraction tasks grow, and its performance quickly degrades as document sizes increase. This paper will survey the use of a relational model for constructing SQL-like statements that represent information extraction rules. These statements can then be translated into an execution plan, which is further optimized using well-known query optimization algorithms.
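
The relational framing can be made concrete without any particular engine: treat each extractor's matches as a relation of spans and compose them with selections and joins. The pandas sketch below is an illustrative stand-in for the SQL-like rule languages the surveyed work describes, not their actual syntax; the text, patterns, and proximity threshold are invented.

    # Illustrative relational-style extraction: regex matches become relations
    # of spans, composed with a proximity join (stand-in for a SQL-like IE rule).
    import re
    import pandas as pd

    text = "Please reach Jane Smith at 555-1234 or John Doe at 555-9876."

    def matches(pattern, label):
        rows = [{"label": label, "start": m.start(), "end": m.end(), "text": m.group()}
                for m in re.finditer(pattern, text)]
        return pd.DataFrame(rows)

    names = matches(r"[A-Z][a-z]+ [A-Z][a-z]+", "PERSON")
    phones = matches(r"\d{3}-\d{4}", "PHONE")

    # Join: a phone belongs to the person whose span ends shortly before it.
    pairs = names.merge(phones, how="cross", suffixes=("_person", "_phone"))
    pairs = pairs[(pairs.start_phone > pairs.end_person) &
                  (pairs.start_phone - pairs.end_person < 10)]
    print(pairs[["text_person", "text_phone"]])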


3D Graphics on the Web using Three JS and WebGL

With the introduction of HTML5 Canvas support and WebGL, the browser has become a platform of choice for rendering complex 3D graphics in animation and game development. Frameworks such as Three.js provide friendly APIs and abstractions for developing these 3D graphics. After a thorough evaluation of Three.js and WebGL, it is clear that they could serve as the framework of choice for rendering complex visualizations, such as scientific data, or for supporting data analysts in understanding and extracting insights from big data. The following are some examples I created using Three.js and WebGL:



Review of Real-time Volume Rendering for Medical Imaging via Web Browser

The scientific fields of medical imaging and meteorology have been pioneers and driving forces behind active research in 3D visualization. Today, tools that provide visualization for medical scans such as CT and MRI, or for forecasting models that rely on Doppler radar, serve as the primary instruments for radiologists and meteorologists. Both professions base their decisions almost solely on their interpretation of these images. For radiologists, the advent of CT and MRI scans provides data that can be rendered in 3D, allowing them to observe the anatomical structure of the body and identify abnormalities that assist in diagnosing a patient. A meteorologist interprets the results of radar scanning to understand the magnitude and trajectory of a storm. Both professions make critical decisions that impact human lives; therefore, improvements to their visualizations, such as improved fidelity and dimensionality, can improve the accuracy of their decisions. Furthermore, the democratization of access to modern medicine through telemedicine has allowed experts to consult and collaborate globally with their patients through web platforms (Min et al, 2018). The paper by Congote et al provides a technical solution for bringing 3D medical (and radar) imaging visualization to the web.



IBM Announces IBM Watson Annotator Service for Clinical Data

This month IBM announced the availability of our latest cognitive services, showcased during the HIMSS conference. These services are offered as part of the Watson Platform for Health, which provides an ecosystem of capabilities for running healthcare applications in a PHI-safe environment with access to a collection of cognitive capabilities previously reserved for Watson solutions, now available to application developers. As a technical lead for our Watson Health Cognitive Platform, I had a chance to see this evolve from a vision into a concept and now a full production SaaS offering, bringing cutting-edge clinical NLP capabilities into the hands of healthcare application developers everywhere.



Mayo Clinic's use of Watson's Clinical Trial Matching

Earlier this month, during the HIMSS conference, Mayo Clinic published a clinical study demonstrating the impact of Watson Health's Clinical Trial Matching solution. The findings demonstrated an 80% increase in enrollment of patients into clinical trials over an 11-month period. This is exciting news and shows the promise of applying cognitive computing to healthcare challenges. The CTM project continues to grow, adding support for additional diseases. I am proud to have served as a technical lead for this project, which applied many lessons learned from earlier cognitive solutions and led to our current Watson Health cognitive platform architecture.



Addressing Security Concerns with Aspect-Oriented Design

This paper presents an aspectized solution for addressing security concerns in an existing system. Security as a non-functional requirement tends to be overlooked or deferred until after the design and implementation phases of a project [4]. When applying Object-Oriented Design, system decomposition occurs around a single dimension. This typically revolves around process or data flow, causing other dimensions such as security, performance, or logging to cross-cut the primary dimension of decomposition [2]. Security concerns affect the broader system with complex logic that requires frequent updates and maintenance, increasing the risk of defects. Because security patterns increase tangling and scattering by increasing dependencies between objects in the system, addressing this concern with an aspect-oriented approach provides an elegant encapsulation of the security mechanisms and produces better modularity across the system [7].
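
The paper's setting is AspectJ-style aspects; Python has no weaver, but a decorator gives a rough feel for the encapsulation being described: the authorization check lives in one module and is applied declaratively instead of being scattered through every business method. The role names and auth context below are hypothetical, and this is only a sketch of the idea, not the paper's design.

    # Aspect-like encapsulation of an authorization check via a decorator
    # (illustrative; role names and context handling are hypothetical).
    import functools

    current_user = {"name": "alice", "roles": {"clinician"}}   # stand-in for a real auth context

    def requires_role(role):
        """Cross-cutting security concern kept in one place, applied declaratively."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if role not in current_user["roles"]:
                    raise PermissionError(f"{current_user['name']} lacks role '{role}'")
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @requires_role("clinician")
    def view_patient_record(patient_id):
        return f"record for {patient_id}"

    print(view_patient_record("P-42"))       # allowed
    current_user["roles"] = {"billing"}
    # view_patient_record("P-42")            # would now raise PermissionError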


Aspect-Oriented Whiteboard Pattern

The Whiteboard pattern takes advantage of the centralized OSGi Service Registry as an intermediary that decouples the listener from the observable. In addition to reducing tangling, this achieves the desired effect of allowing listener services to run through their lifecycle, starting and stopping, without the observable holding references to them. As a result, this allows for better memory management, since garbage collection can reclaim memory from stale listeners that are no longer used.

Unfortunately, the Whiteboard pattern introduces a heavy dependency on the OSGi framework itself by requiring all the elements of the pattern to interact with the service registry at various points in the lifecycle. This produces additional unwanted coupling and complexity. This paper proposes an aspectized implementation of the Whiteboard pattern that preserves the principal design elements while simplifying and decoupling the pattern elements from the OSGi framework.
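
Stripped of OSGi specifics, the Whiteboard idea is simply that listeners register themselves with a shared registry and the event source queries that registry at publish time, so neither side holds a direct reference to the other. The framework-neutral sketch below (with hypothetical names) shows that decoupling; the paper's aspect-oriented version goes further by also removing the explicit register/unregister calls from the participants.

    # Framework-neutral sketch of the Whiteboard idea: the event source asks a
    # shared registry for listeners at publish time instead of holding them.
    class ServiceRegistry:
        def __init__(self):
            self._services = {}          # interface name -> list of instances

        def register(self, interface, service):
            self._services.setdefault(interface, []).append(service)

        def unregister(self, interface, service):
            self._services.get(interface, []).remove(service)

        def lookup(self, interface):
            return list(self._services.get(interface, []))

    registry = ServiceRegistry()

    class TemperatureListener:
        def on_event(self, value):
            print("temperature changed to", value)

    class EventSource:
        def publish(self, value):
            # No stored listener references: resolve them at publish time.
            for listener in registry.lookup("TemperatureListener"):
                listener.on_event(value)

    listener = TemperatureListener()
    registry.register("TemperatureListener", listener)
    EventSource().publish(21.5)                  # delivered
    registry.unregister("TemperatureListener", listener)
    EventSource().publish(22.0)                  # no listeners, nothing delivered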


Automated Ontology Learning Algorithms and Methods

Ontologies are a vital aspect of any cognitive or expert system seeking to provide semantic understanding of a given domain. This paper reviews current methods for automating the extraction of ontologies from a corpus of textual documents through the use of various learning algorithms presented in a collection of papers on this topic. The aim of these methods is to produce a conceptual hierarchy, along with semantic relationships, for a defined set of concepts given a collection (corpus) of domain-specific documents.


Doppio: Browser Execution Environment for Java

This paper reviews the Doppio runtime execution environment for web browsers, which allows high-level programming languages to be interpreted directly in the browser without alteration.


The Technology-Productivity Paradox

Technology is what has allowed the human race to evolve and improve its quality of life and working conditions. For many thousands of years after the discovery of fire and wood and stone tools, the evolution of technology went silent. Then the discovery of iron and the ability to construct metal devices (such as weapons or tools) leaped humans further ahead. Then came another silent period where no real invention or innovation occurred until the industrial revolution, when steam and steel propelled the human race into 200 years of explosive innovation. That innovation eventually led to trains and automobiles, later to communication and electronics such as calculators and computers, and finally to global networks such as cellular and the internet and the rise of the ubiquitous mobile devices that continue to dominate our daily lives.

These bursts of new technology waves are examples of major transformations that had a positive impact on our daily lives and work. However, technology is not always positive or efficient. This phenomenon is known as the "technology productivity paradox". The paradox describes technology's role as having a disruptive and sometimes negative impact on our daily lives and work environment.

Take, for example, our cars today. They are now instrumented with thousands of digital sensors managed by electronic control units (ECUs) that typically interface with a touchscreen device on the dashboard. Cars now have hundreds of features that are made available and controlled through these computers. You may also have noticed that your car's buttons and controls, such as volume or changing the radio station, no longer respond instantly. Sometimes they are disabled or inaccessible, such as when I am in reverse and the rear camera view is on the screen. Maybe you were on your mobile phone and wanted to lower the volume or turn off the radio as soon as you started the car but could not. Or you are trying to play music from your phone and fumbling to get the Bluetooth connected.

Almost certainly you can recall a time when your company adopted a "new system" that was supposed to make everything easier and more efficient but instead required lots of training, was difficult to use, perhaps because it had too many features (feature overload), and was not intuitive, so it took trial and error to figure out how to accomplish your work.

Even our phone connections are less reliable. At some point, U.S. telecommunication companies achieved 99.999% availability (called "five nines"). This meant that you would likely never experience a connection drop or failure in your lifetime. This is no longer the case. Land lines today use VoIP/SIP over internet connections that are nowhere near as reliable, and that is the primary reason why, once a week, I experience a connection issue with my office land line.

In their paper "Revisiting the Productivity Paradox of IT", MacDonald, Anderson, and Kimbel conclude that there was no correlation between spending on computer systems, company profits, and productivity. This counterintuitive finding led them to further research the causes of the phenomenon. They found that the return on investment (ROI) for any computer system was about 6% of the amount invested. This is at least an order of magnitude less than what most computer system optimization or improvement projects promise to deliver to the company.

If you reflect on this for a moment, you'll realize that your daily job only gets more complicated as more systems are introduced into your daily work process. At best a new system may provide a marginal improvement; at worst it can make it very difficult to accomplish your work, and it always comes with some overhead cost for educating and training the workforce and migrating from the existing system to the new one.

Things have gotten so bad that Ethan Dreyfuss and Andrew Gadson et al claim that "American Factory works, but the American Office does not". They assert that the amount of overhead and complexity introduced into most office environments is now so disruptive that very little work is accomplished relative to the workforce investment required to keep the operation running.

So what does this mean? When considering future investments in new systems, make sure the investment is driven by a specific business need and clearly provides additional business value (not just equivalent value). Adopting or upgrading to a new system simply because it is new is not a sound reason and will most likely perpetuate the paradox. When working with vendors or consultants, make sure a clear cost-benefit analysis is performed that includes not just the initial cost but also the total cost of ownership (TCO) and the overhead that is typically not mentioned in cost estimates.


The 3 C's of Leadership

There are countless books and theories written on the broad topic of leadership. Here I offer three important elements that any leader should consider and should periodically assess for how well each is being conveyed to the team.

The three C's consist of Change, Challenge, and Clarity. Many successful leaders are already addressing each of these but may not be aware of it. Others may be lacking one or more of these.

Any organization, team, or project that is not undergoing change risks stagnating, losing excitement and competitiveness, and eventually failing. Therefore, any good leader should constantly look for opportunities to change or improve their process, methods, tooling, and team structure. According to Bill Gates, companies are either spiraling upwards or spiraling downwards; they are never standing still. This means that if you are not changing, then you are actually spiraling downward. Change can be received differently depending on the team culture or the area of change. Humans have a tendency to establish routines when possible, and when something is not causing pain or problems, we tend not to want to change it. Engineers are infamous for an "if it's not broken, don't fix it" mentality. Overcoming this mindset can be difficult, but if the team gets into a habit where, periodically (maybe every couple of weeks or monthly), they purposefully undergo very specific changes with the intent of improving their work process, adopting change becomes easier. In other words, the more we undergo change, the less painful it is to change.

Humans are curious creatures; we like to explore and learn. We also like to establish goals and work towards achieving them. Challenging the team with specific targets or goals is therefore a fundamental motivator. We like to have goals because they allow us to make decisions about work priorities and resource allocation. Teams and individuals that are challenged also tend to be motivated, which results in lower rates of attrition.

A few years back, a couple of friends of mine worked as engineers at a medical device company that produced various instruments for DNA sequencing and cell counting. Their company underwent some ownership changes and, later, changes to its leadership. They spent an entire year without being challenged by the new leadership. There were no new goals, targets, or products to be launched. They were only to support the existing products in the field in case any issues arose, and they spent most of their day idling. It turns out that being paid to essentially not work was bad. The company suffered major turnover, and my two friends both left for another company within a year of that leadership change. They simply couldn't stand by and watch their technical vitality and skills drift away.

The final 'C' stands for Clarity. Providing clarity in your vision, strategy, execution, and process is vital to ensuring everyone clearly understands what the challenge or mission is and how they are going to get there. Many organizations lack clarity in their strategy and execution. The organization's leaders may feel that the strategy and direction are clear. This is partly because they have direct input into what that mission is and spend a lot of time working with other leaders to define the strategy and direction for their senior executives. But they often forget to communicate it to the teams they depend on to execute and realize that strategy. This sometimes occurs because the leadership is constantly revising the strategy, or vital elements of it, and therefore remains in a perpetual "draft" mode where nothing is officially blessed and nothing can be rolled out to the rest of the organization. It is vital that there be frequent moments throughout the year when the leadership is forced to address strategy and direction, including calling out changes that are being made and clearly defining the challenge for their teams. At IBM we typically have quarterly "All Hands" calls where our senior executive team addresses the rest of the organization to communicate the goal and the challenge. This forces the leadership to move beyond "draft mode" paralysis.

I recommend periodically assessing the 3 C's: consider whether you are adopting incremental changes, challenging your team to achieve a goal or target, and clearly communicating these changes and challenges to all members of the organization.


The 3 P's of Status

Communicating status is ubiquitous across all professions. Being able to relay current progress (or, in some cases, the lack of progress) to your leadership team is an essential aspect of any project. Status is recursive and cyclical: it is typically gathered from the bottom up and rolled up, each time shedding operational details and focusing more on the business objectives set by executive leadership.

Any project leader knows how much time is consumed preparing and communicating status. Therefore, I thought it only made sense to spend some time dissecting the key aspects of status. It turns out that all good status presentations include 3 fundamental ingredients, and the subject matter or domain does not make a difference. Whether the subject matter is very technically focused (engineering-specific objectives) or high-level business focused (executive business objectives and strategy), they all must address these 3 aspects:

  1. Progress
  2. Problems
  3. Plan

Throughout my career at IBM, I've had the opportunity to lead all sorts of projects. Most of these projects were new product development (a.k.a. greenfield engineering); others were focused on saving a failing product. No matter the project, leadership responsibilities always included presenting the status of my projects to my management and executive leadership team.

A few years back, while leading an initiative to improve the scalability and performance of a critical virtualization management component for IBM's new cloud systems, we found ourselves in a firestorm of problems. Our component was directly blamed for the overall offering not being competitive and at times not functional. These issues were holding up our launch of the new IBM Cloud and delaying the distribution of a new series of Cloud-ready servers. I was tasked with leading an initiative to greatly enhance the scalability and performance of our virtualization stack across multiple supported platforms.

Anyone who knows anything about "cloud computing" knows that virtualization is a critical foundation with an inherent requirement to scale and perform. After only two days on the job, my 2nd-line manager (a program manager) called my name during a weekly operations review meeting and requested status for this work. He wanted to hear that we had a plan of attack and a handle on the problem. But we did not yet understand the problem, and I had not yet formulated a plan. Not good! The next few minutes were very intense. I thought to myself, does he not know that we just started this initiative 2 days ago?!? Of course I did not say that aloud and instead proceeded to do my best to describe our initial team activity and some initial next steps. I was scolded for not having this information clearly laid out in a chart deck. This was a moment of awakening for me and a lesson learned: clearly there was a lot of pressure from the top, and I had underestimated the speed of execution my management expected on this issue.

This marked the very beginning of a long and stressful endeavor that would span 2 years of very challenging technical work. During this period I learned a lot about status delivery, especially since I was presenting weekly to my direct management on Mondays, to our middle management (directors) on Tuesdays, and to our VP (4th- and 5th-line managers) on Thursdays. I learned that how you convey status is critical, and I spent plenty of time tweaking our messaging, the structural format, and even the styling and font of our chart deck. I realized that this was both a craft and an art that could be improved with practice and experience.

I quickly had to beef up my presentation skills and my "chartsmanship". It was while on the phone one night with my 2nd-line manager, preparing for a presentation to our executive team the next morning, that he shared some insight with me: "The 3 P's".

Suddenly it all fell into place. It was one of those things that is incredibly intuitive when you first hear it, and you may have already known it subconsciously, but clearly defining and labeling it unlocked a key pattern that I would leverage for every status presentation from then on. Although the substance and detail of the status varied depending on the audience, the approach always followed this pattern. It directly answers the core questions that any manager or executive wants answered in order to make key decisions on resources or priorities. I hope you find this status pattern helpful.


What is NLP?

What is Natural Language Processing (NLP) ?

From the early days of computing, when computers were just big calculators used to assist companies with computing payroll or tracking sales, computers have always been used to process structured data. Structured data is data that is well formed, with a consistent structure that a computer can be programmed to parse and perform operations on. This means that each piece of data follows a specific format, such as a spreadsheet with numbers organized in specific columns and rows. Computers are optimized to perform millions of arithmetic operations in a very short period of time. This makes them very useful when you are trying to compute sales figures or add up credit card charges.

But computers, and more specifically the computing paradigm we use, are not optimized to process unstructured information. Unstructured information is information that is not well formed and has an unpredictable format. A perfect example of this is human language. Human language is very difficult to process because it uses a lot of ambiguity and variance to communicate information. The same sentence can sometimes mean multiple things, and therefore a human has to use other information as context to understand what is being communicated. That's kind of like sending the computer only part of the information and making it figure out the rest based on who, what, where, and why the information is being sent.

You may wonder: "But I can write articles and essays using a computer, and it does a great job of letting me edit, save, print, and even correct my spelling and grammar." True, but there is a BIG difference between letting you send information and actually understanding that information.

Just because a computer program lets you type in human language, save it, and even look up misspelled words in a dictionary doesn't mean it understands what you have typed. Do you know of a program that gives you an opinion about how good your paper was? Or ideas for additional paragraphs for your essay? You don't! That's because computers struggle greatly with trying to understand human language.

Unlike computer languages (such as a programming language like Java or machine language such as binary 1's and 0's), human language is very complicated and filled with literary devices that further complicate matters. Computer languages are designed to be deterministic and grammatically complete, meaning that two identical programs always yield the same result. In human language, by contrast, the same words can mean two different things depending on the context. A classic example of this is the phrase "I shot an elephant wearing pink pajamas".

Who was wearing the pajamas? The elephant? Or the person who shot the elephant? Was the elephant shot with a gun? Or with a camera?

This is an example of ambiguity in language. It is very difficult for a computer because not all the information needed to answer these questions is provided.

Take another example: "He flew over here." Did the person actually fly somewhere, maybe on an airplane? Or were they running or driving very fast? These kinds of literary devices are difficult for a computer program to decipher.

Natural Language Processing is a field of computer science that combines computer algorithms with the study of linguistics to help provide the illusion of understanding human language. Notice that I did not say that a computer actually, natively understands language; but through sophisticated algorithms and statistical probability, it can do a pretty good job of giving that illusion.

More posts to come on NLP, stay tuned ...


The ideal team player

The most crucial decision any organization makes is the selection of its team members. In fact, start-up companies are funded by VCs largely based on their team and talent. The reason is that the initial start-up "idea" is rarely the one that is ultimately delivered. Therefore, hiring a talented team that can work together effectively is vital. Many company recruiters obsess about hiring the smartest and brightest talent, and their interviewing process is designed to vet those hard technical skills. But this approach misses other equally important dimensions of a candidate. Over the years I have seen my company hire talent with a myopic focus on technical skill and academic acumen, only to later see those new hires terminated or resigning.

In his book "The Ideal Team Player", Patrick Lencioni presents a fictional story of a construction company with a new CEO in the midst of major growth and needing to staff a large number of workers. The new CEO is faced with a major challenge: the company has a high rate of attrition and its work culture is broken. The company's departments are at war with each other, and there is infighting amongst the leadership. This is when the CEO begins to dig into the characteristics that make up the ideal team player.

Lencioni ultimately defines the ideal team player as being: Humble, Hungry, and Smart. Using these 3 virtues, you can assess whether a new hire candidate will be a good fit into your organization's culture or whether an existing employee is exhibiting these virtues.

A hiring process should aim to assess these 3 virtues during the interview through specific questions that focus on candidates' past experiences in dealing with team members or on how they respond to hypothetical scenarios.

Just think for a moment of the colleagues you enjoy working with and those you don't. Now think about these 3 virtues, and I am sure you'll find that the ones you enjoy working with have 2 or more of them and the ones you dislike working with lack 2 or more of them.

Lencioni proceeds to describe the archetypes of personality that result from lacking one or more of these characteristics. Here is a table that labels these personalities based on the virtues they possess:

Archetype               Humble  Hungry  Smart  Description
The Pawn                Yes     No      No     provides no value to team
Bulldozer               No      Yes     No     quick destroyers of team
Charmers                No      No      Yes    little interest in long term well being of team
Accidental Mess Maker   Yes     Yes     No     lack of understanding of their words and actions results in interpersonal problems
Lovable Slackers        Yes     No      Yes    only do what they are asked and rarely take on more; limited passion
Skillful Politician     No      Yes     Yes    works hard, but only for themselves
For example, lacking the humble and smart virtues results in what he calls the "Bulldozer". Lacking only hunger results in the personality archetype he calls the "Lovable Slacker".

Remember, no one is perfect. Lencioni reminds us that these are not permanent characteristics; how well any individual exhibits them may change over time and may be impacted by stress and other personal life circumstances. An individual can always improve and be coached in any of these areas; an ideal team player simply requires less coaching to exhibit these virtues.

The ideal team player is the one that consistently demonstrates all three of these virtues over an extended period of time.


A culture that innovates

Every company or work environment has its own culture. Most of the time, the true culture is not what is listed in the company's values statement or HR marketing material. That disconnect may not necessarily be a bad thing, but it means the culture has become its own living being. It will continue to shift and change over time and eventually may no longer reflect the vision, or contain the aspects, that help support a vibrant and motivated workforce.

Most companies function successfully while the business climate is good. When the business climate becomes more challenging, such as during a recession, cultural issues within an organization become much more visible to everyone in the company. Culture is something that needs to be preserved in every action and decision taken. If a company is not purposefully defining what its culture should be, then it is opting to let the culture grow on its own. Sometimes it can evolve into something good, but often it grows and spreads like mutated cells. The best way to ensure your company has the culture its founders and visionaries intended is to actually define it and instill it in every decision and action that is taken.

In my company, we require a culture that is geared towards innovation. As a technology creator, our company must continually innovate and reinvent itself. Here is my attempt at defining the key aspects of a "culture for innovators":

  1. Motivate workers by empowering them with a sense of ownership, giving them challenging work, and acknowledging and recognizing good work.
  2. Reduce controls and procedural obstacles by encouraging community ownership of components, discouraging individual component ownership, and fostering a culture of technical mentorship and cross-training that allows developers to cross roles and functional areas.
  3. Improve clarity and communication by eliminating a "need to know" mentality and encouraging inquisitiveness and curiosity, while establishing clear roles and responsibilities for assignments.

The culture of a company is what gives it identity and allows it to attract and retain talent. Unfortunately, very little focus is given today to this fundamental building block of any organization.

Stay tuned for more posts on culture...


MMF's revisited with analogy

In the past I've discussed the importance of defining work as a collection of minimally marketable features, or MMFs. The general idea of using MMFs is to define new enhancements to a system as small, well-defined, and manageable increments to the existing system. The "minimally marketable" aspect means that you want to build the smallest increment possible, but not so small that it fails to deliver any tangible value to the users of the system.

While doing some demolition work in my home to prepare for a remodeling project, an analogy came to mind to describe the advantages of an MMF. Tearing down a wall does not involve a whole lot of science; with a crowbar and a sledgehammer, the wall will soon be lying on the ground in pieces. At some point in the demolition process you'll notice that you can't make further progress until you take out the debris, and taking out this garbage typically requires a shovel and some industrial garbage bags. It was during this process of filling up and hauling out these garbage bags that I realized this was the perfect analogy for the advantages of MMFs. If I tried to shovel too much garbage into a bag, the bag became very heavy and very difficult to move. So even though I appeared to be saving time and material by filling a bag to its maximum capacity, I was actually going to lose a lot more time later trying to move that bag. If I filled the bag too much, I simply couldn't move it, or worse, the bag would break while I attempted to move it and I would have to do more work to clean up the spill and start over.

But if I instead filled the bag to a manageable amount, it was very easy to move and I avoided any spills. Even though I ended up repeating the process of getting a new bag, filling it, and moving it more times, the overall process was more efficient, consuming less total time and avoiding spills.

Trying to jam-pack a large set of features and enhancements into a release cycle is like filling those bags to the max: you are left with a very heavy bag that is very difficult to move and at risk of breaking. Instead, defining work as a manageable stream of MMFs and increasing the number of release cycles results in fewer changes to validate per cycle, and therefore higher quality and stability.


Scarcity

Scarcity has a profound impact on the way we live our lives. Scarcity affects our cognitive abilities to analyze, reason, and make decisions every day. In their book Scarcity, authors Shafir and Mullainathan discuss how different forms of scarcity can explain why poor people tend to remain poor and why busy people tend to remain busy. We experience scarcity in a few ways. Time scarcity occurs when we are short on time, such as when we're working against a deadline or due date. Financial scarcity occurs when we're constrained by a budget or a lack of money, such as when we have to make a trade-off between purchasing a new toy and paying off credit card debt. Scarcity acts like a tax on what the authors call our "cognitive bandwidth". When presented with choices, having reduced cognitive bandwidth impacts our "fluid intelligence" and can therefore lead to poor choices: buying a brand-new television, subscribing to a new service you do not need, or electing to forgo health insurance when there is a likely chance it will be needed. These are commonplace scenarios that we encounter every day, and the decisions we make at one point in time might not be the same at a different time, when scarcity is affecting us differently.

When you are extremely short on time, you may decide to skip your child's sports game in order to get more time to prepare for an important presentation the following morning. When you are on a very tight budget, you may choose to skip eating at a restaurant that weekend in order to ensure you have enough money for your rent or car payment. Shafir and Mullainathan postulate that everyone's intelligence is limited but also fluid, meaning that someone who is very intelligent can make very foolish decisions when impacted by scarcity. This explains why some very intelligent people are not successful. Maybe they are always too busy and don't have enough time to complete their work or projects. Maybe they are very poor and spend almost all of their time finding food. The authors demonstrated through many psychological experiments how the same person could make a smart choice and then a very foolish choice when presented with severe scarcity constraints. This is because their fluid intelligence has been limited by a severe tax on their cognitive bandwidth.

For those who suffer from time scarcity (i.e. always busy, never enough time), scarcity can explain how we get stuck in a perpetual cycle of trying to catch up. When stuck in that mode, you end up having to make difficult decisions about where to spend your available time. Much like money, your time is limited, and you are forced to constantly choose where to spend it. This almost always means you spend your time on "urgent" tasks that require immediate attention and very little time on "important" tasks that are not urgent but have a large impact on your life or success.

Stay tuned ... more posts to come on scarcity.


Cognitive Decision Support Systems

Decision support systems and expert systems assist human experts by making recommendations or determinations given a configured or programmed set of business rules or policy criteria. These systems are an integral part of business process management (BPM), improving the accuracy and efficiency of processing requests.

Every business or organization makes decisions. These decisions are typically based on business guidelines or policies that help govern and ensure their business process is consistently applied to transactions or requests. These business rules are typically managed and updated by business analysts and periodically reviewed to ensure they reflect the business strategy and model of the organization and help enforce specific business behavior and outcomes.

Every organization also retains subject matter experts (SMEs) or trained personnel who understand, implement, and enforce these business rules as part of their work within the business process workflow. Small organizations that do not make large numbers of decisions are able to train and support an SME staff to cover the decision volume. Large organizations, however, such as insurance companies with millions of customers or members, are required to make hundreds of thousands of decisions per day. This requires a large SME staff that must be trained, supported, and managed to ensure consistency and fidelity in the overall BPM.

In order to optimize and manage this complexity, large organizations turn to sophisticated software systems to help enforce business rules. These systems are programmed by software engineers working with business analysts to create rules in software that assist with automating request processing. These rules are frequently updated; for example, an insurance company updates its medical policy periodically in response to new comorbidity studies or revisions to clinical guidelines. This requires ongoing development of the system and thus a resident development staff or a long-term relationship with an external consulting firm.
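
At their core, such systems evaluate a request against a maintained set of rule predicates. The sketch below is a deliberately simple illustration of that pattern (the fields, rules, and outcomes are invented), far removed from a production rules engine, but it shows why rule updates become an ongoing development activity.

    # Deliberately simple business-rule evaluation (fields and rules invented).
    RULES = [
        # (rule name, predicate over the request, outcome when the predicate holds)
        ("missing_prior_auth", lambda r: r["requires_prior_auth"] and not r["prior_auth"], "deny"),
        ("over_annual_limit",  lambda r: r["claim_amount"] > r["annual_limit"],            "deny"),
        ("low_value_claim",    lambda r: r["claim_amount"] <= 100,                         "approve"),
    ]

    def evaluate(request):
        """Return the first matching rule's outcome, else refer to a human SME."""
        for name, predicate, outcome in RULES:
            if predicate(request):
                return outcome, name
        return "refer_to_sme", None

    request = {"claim_amount": 80, "annual_limit": 5000,
               "requires_prior_auth": False, "prior_auth": None}
    print(evaluate(request))   # ('approve', 'low_value_claim')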


What is Cognitive Computing?

I often hear folks talking about "cognitive" computing as if it were a single system or mechanism that can solve all your difficult problems. Cognitive is really just a solution pattern or a paradigm for solving certain kinds of problems. IBM coined the phrase "cognitive computing" after its debut of the Watson Jeopardy system that famously defeated the top two Jeopardy champions of all time. But Watson is not a singular system.

Watson and cognitive computing are considered a new era of computing that integrates Natural Language Processing (NLP), Machine Learning (ML), and statistical modeling to solve problems that typically require a human who understands written language and applies domain expertise to arrive at a conclusion. The conclusion can be a decision, a recommendation, or simply supporting evidence or a summarization presented to assist the human. Another aspect of cognitive computing is machine learning. ML is used to induce a dynamic model that can classify, cluster, or predict an outcome. ML is typically used alongside a corpus of content and some ground truth, knowledge curated by a human to prime the model, allowing it to extrapolate and identify patterns and correlations that can be used to predict an answer or outcome for new, previously unseen questions.


Reflections on Agile Transformation

A couple of years after the major agile development transformation across IBM, I have taken the opportunity, in the spirit of being agile, to reflect on the agile methods and their implementation on the several projects I have been directly involved with. In doing so, one can quickly identify several areas where current methods simply do not address recurring issues affecting most agile projects at IBM. This led me to search for answers to these common problems. In this quest for additional understanding and insight, I returned to the Lean software development material presented early on during the education blitz that transformed IBM from a plan-driven (waterfall) process to an iterative process influenced by Agile, specifically a Scrum (with some XP) implementation rebranded as the IBM Agile process.

During this revisit of Lean I was hoping to unlock missed principles and methods that could help solve issues like:

  • Chronic slippage of Stories past iteration
  • Backlog of Stories waiting for test
  • Sprint-story containment (doability)
  • Lack of leading indicators to help with estimation and story delivery targets

During my role as Scrum Master for several Agile projects, I constantly searched for solutions to the above issues. Some of the methods I employed:

  • Focused on starting only a few stories, rather than allowing many stories to start and causing a long delay before any stories are ready for acceptance testing
  • Required team demos of functionality to show that the function is "demoable" and that sufficient unit testing has been performed by development, with the added benefit of demonstrating to testers the expected behavior they would later test
  • Used "ideal" days as a value that team members could easily quantify and conceptualize when estimating stories in order to better project whether a story fits in the sprint length and the expected time of delivery.
Many of the above solutions were effective in mitigating the issues they targeted but there was still a lot of sausage-making behind the scenes. In search of a proverbial "Agile 2.0" that would provide a natural flow composed of systematic and easily reproducible methods that could universally be adopted by IBM agile projects to cope with chronic pain points, I discovered "Kanban".


What is Kanban?

Kanban originates from the Japanese word for signboard or billboard. It began as a manufacturing system for improving the efficiency of a production line and has since been adapted to software development; a familiar everyday example of a kanban signal is the marking on the side of a sandwich box or a soft drink cup. As a system for orchestrating an assembly line, Kanban finds its origin at Toyota, where lean manufacturing principles originated. In the world of software engineering, Kanban is a derivative of lean and agile, combining their principles and including suggested methods and indicators based on a process flow specifically designed to solve many of the issues faced by what I call "Agile 1.0".

Kanban combines lean and agile principles and emphasizes a continuous development flow driven by pulling value through a defined workflow, with an established team capacity for each stage of the flow. Kanban uses constraints, called limits on Work in Process (WIP), based on an agreed-to team capacity at each stage of the development flow. Kanban provides a twist on the typical Scrum implementation of Agile by introducing Lean software engineering principles and altering traditional Scrum project structures to tackle many of the chronic issues faced by most Agile teams implementing a Scrum-based process. Kanban is not considered a development process but rather a set of principles derived from Lean and Agile. Its implementation is not prescriptive; it is meant to influence the development process and its traditional adoption of Agile. In some areas Kanban challenges so-called agile implementations and demonstrates the value of being "agile" vs "Agile". Let us examine the key principles and methods that Kanban promotes and demonstrate how modifying existing agile project structures and methods can lead to an Agile 2.0 evolution founded on Kanban.

Rather than focusing on Agile which may lead to being successful, Kanban focuses on becoming successful, which may lead to being agile.
David Joyce


Work in Process (WIP) and Capacity limits

A key indicator in Kanban is WIP and its corresponding capacity limits. Setting limits on how much work a particular stage of development can contain is crucial to being able to commit to dates for when a feature (or user story) can be delivered. These limits on WIP help compute throughput and provide high-fidelity delivery estimates based on the present working environment. Increasing the WIP in the development flow delays all of the items (features or user stories) in process. WIP also provides leading indicators that can be used when committing to the delivery of a feature.

Typical Agile projects rely on Velocity, which is a trailing indicator. Velocity measures what the team was able to complete last sprint and is therefore not a good measurement, especially for teams whose membership is constantly changing (new or departing members, or role changes). Recall the mandatory disclaimer the SEC requires for mutual fund advertisements: "Past performance does not guarantee future results." The same applies to software development. A more valuable measurement is a leading indicator that signals when adjustments are needed immediately. An example of a leading indicator is WIP exceeding the capacity limits. When the amount of work (such as the number of stories) exceeds the predefined capacity for a stage (such as too many stories waiting to be tested), resources are temporarily reallocated from up-stream stages (such as development) to assist the down-stream stage (such as test) that is exceeding its limits. Once the bottleneck is cleared, the borrowed resources return to their original assignments. Using this WIP capacity limit as a leading indicator, you are able to deal with a situation in real time and do not have to wait for stories to slip.
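
A minimal sketch of that leading indicator, with made-up stage names and limits: the check below compares current WIP against the agreed capacity and flags the stages that need immediate attention.

    # Agreed capacity per stage and the work currently in process (illustrative values).
    wip_limits = {"design": 3, "development": 5, "test": 4}
    current_wip = {"design": 2, "development": 5, "test": 6}

    def over_capacity(wip: dict, limits: dict) -> list:
        """Return the stages whose work in process exceeds their agreed capacity."""
        return [stage for stage, count in wip.items() if count > limits[stage]]

    # A non-empty result is the signal to rebalance people toward the bottleneck now,
    # rather than waiting for last sprint's velocity to reveal the slip.
    print(over_capacity(current_wip, wip_limits))  # ['test']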


Pull don't Push!

Kanban approaches the development process from a value "pull" rather than the traditional "push" method. This means that work is pulled by downstream stages rather than being pushed from upstream. Consider a simple development flow with design, development, and test stages. The test stage is downstream of the development stage, and development is downstream of the design stage. Rather than allowing design to push work into development and development to push work into test, the pull method requires the test stage to pull work from the development stage and, in turn, the development stage to pull work from the design stage.

When this method is combined with capacity limits on WIP, it guards against the common scenario where development takes off at one pace and test is left with a large backlog of items, many of which will produce defects that need to be addressed. By enforcing the capacity limits, once test has reached its capacity it cannot pull any additional work from development; development will eventually reach its own capacity limit and have to stop taking on new work. The idea is that the team is immediately made aware of the "traffic jam" that has been created and can rededicate members to help resolve the situation. Perhaps the situation is caused by external dependencies not being met or by a lack of resources to perform at the set capacity limits, in which case the capacity can be adjusted or new resources can be assigned to the stage that is operating at capacity. This approach builds in some slack for each stage's work yet immediately exposes when the team's resources are not balanced correctly or when a capacity is not realistic or achievable. The pull approach is a key principle of Kanban, and its implementation can yield a more smoothly running development flow, with the opportunity to fine-tune WIP capacity limits and, in turn, immediately adjust new delivery commitments based on current dynamics rather than the past.
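
The sketch below models the pull rule under WIP limits (stage names, limits, and stories are invented): a downstream stage only pulls new work when it is under its limit, so a full stage stops pulling and the jam becomes visible upstream.

    from collections import deque

    limits = {"design": 2, "development": 3, "test": 2}
    stages = {name: deque() for name in limits}
    backlog = deque(["story-1", "story-2", "story-3", "story-4", "story-5"])

    def try_pull(downstream: str, upstream: deque) -> bool:
        """Pull one item into `downstream` from the `upstream` queue if WIP allows."""
        if upstream and len(stages[downstream]) < limits[downstream]:
            stages[downstream].append(upstream.popleft())
            return True
        return False  # at capacity (or nothing to pull): the jam is immediately visible

    # One "tick" of flow, walked from the most downstream stage back to the backlog.
    try_pull("test", stages["development"])
    try_pull("development", stages["design"])
    try_pull("design", backlog)
    print({name: list(items) for name, items in stages.items()})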

Kanban (pull) changes the underlying paradigm from project-centric to flow- and value-stream-centric.

To fully appreciate the value pull approach, it is important to understand another Kanban concept called the Minimal Marketable Feature (MMF). An MMF is the smallest a feature can be while still being consumable by and valuable to the customer.


Enumerate Work, not People

Kanban also provides a twist to how stand-up (daily) team meetings are held. Kanban argues for enumerating the work rather than enumerating the people. Most agile projects have some sort of quick daily meeting in which each developer talks about the work they are doing and any blocking issues they are facing. Instead of enumerating through the team members, the meeting walks through the work in the development flow. This focuses attention on the development flow and the items in each stage. This approach also allows for a larger team without having to break up into multiple tiers of scrums (a scrum of scrums). With this approach the meeting coordinator (scrum master) walks through the work in the various development flow stages from downstream to upstream, allowing updates to be made immediately. This approach complements the pull method and focuses the team on identifying exceptional situations that should be addressed (blockers).


Continuous stream vs. Timeboxed iterations

Another typical project structure for agile teams (especially Scrum teams) is the use of timeboxed iterations. The adoption of timeboxed iterations is inherently in conflict with delivering features (or user stories), since it requires that they be resized to fit within an iteration length. Interestingly enough, timeboxed iterations are not actually a principle mentioned in the Agile Manifesto. Because the adoption of iterations is so pervasive across Agile projects, one could be led to think that it is somehow a principle or method required by Agile, but this is not the case. Kanban does not explicitly argue against the use of iterations, but it does make a good case for why they are not necessary.

Timeboxes impose an arbitrary process artifact on the customer. No customer ever comes asking for "4 weeks" worth of code. They want features and problems solved.


Decoupled Planning: Planning On-demand

A central principle of Kanban is continuous flow. The natural flow that exists in the development process is often interrupted by iteration ceremonies such as planning sessions. Rather than holding Iteration Planning sessions before each iteration, Kanban encourages the use of a backlog threshold that triggers a planning session whenever the number of items in the backlog falls below that threshold. This encourages planning meetings as often as needed, but no more, and it allows new features to be included at any point during development. Since planning is decoupled from development, it can occur in parallel and as often as the amount of work in the backlog dictates. Because of this decoupled (on-demand) planning, iterations become less important; in fact they become arbitrary and no longer serve a purpose, since planning can occur throughout development.
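
A minimal sketch of the trigger, with an assumed threshold value: planning is scheduled only when the ready backlog runs low, not on a calendar boundary.

    # Assumed threshold: plan again once fewer than five ready items remain.
    PLANNING_THRESHOLD = 5

    def needs_planning_session(ready_backlog: list) -> bool:
        """Signal a planning meeting when there is not enough ready work left to pull."""
        return len(ready_backlog) < PLANNING_THRESHOLD

    print(needs_planning_session(["story-41", "story-42", "story-43"]))  # True -> schedule planning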

A Kanban approach includes all the elements of Scrum (including stand ups as well), but decouples them from all being tied to the Sprint. This can create a more natural process.
Steve Freeman


Story template 2.0

The conventional format for a User Story is: "As a [user], I want to [goal], so that I can [business value]." This format becomes monotonous, especially if you are using a tool or a list to track your user stories; after collecting a large number of them, it becomes difficult and impractical to quickly identify and locate a specific User Story. The format also focuses on an end-user role, when at times the "user" is in fact another actor, such as a system or a vendor leveraging a service offering. Instead, consider an alternative format that leads with the business value and the goal while still preserving the perspective of the user:

In order to [business value], [stakeholder / user] needs [goal]
This format is easier to scan and makes it more efficient to group and locate specific User Stories of interest.
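
As a small illustration of why leading with the business value helps (the stories below are invented), a simple pattern match can group stories by the value they serve:

    import re
    from collections import defaultdict

    stories = [
        "In order to reduce claim turnaround time, the claims adjuster needs automated eligibility checks",
        "In order to reduce claim turnaround time, the claims adjuster needs a prioritized work queue",
        "In order to improve audit readiness, the compliance officer needs an immutable decision log",
    ]

    # Parse "In order to [business value], [stakeholder / user] needs [goal]".
    pattern = re.compile(r"In order to (?P<value>.+?), (?P<user>.+?) needs (?P<goal>.+)")

    grouped = defaultdict(list)
    for story in stories:
        match = pattern.match(story)
        if match:
            grouped[match.group("value")].append(match.group("goal"))

    for value, goals in grouped.items():
        print(f"{value}: {goals}")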


System Scalability and Performance

Performance considerations are often absent from application design documentation, and performance is not considered during most application test phases. This is due to the overall emphasis most design methodologies place on the functionality of a system; most development processes tend not to consider performance until after deliverables have been released. Often, performance requirements are overlooked because of constraints on the time and money available to architect, design, and implement a system. Unfortunately, many of the performance issues found after a system is in production are very costly and time-consuming to detect and correct.

Many of the costly issues discovered after delivery of a system could have been remedied in design or implementation had they been detected early. As with human illness, early detection is the key to a healthy and functional system, and more often than not it averts the danger and cost of surgery. By integrating performance and scalability analysis into the development process, these issues are detected and corrected early. Issues rooted in the architecture are much more costly to repair than issues rooted in the design, which are in turn more costly to repair than issues in the implementation. Ideally, scalability and performance requirements are identified during requirements elicitation and captured as non-functional requirements for the system, leading to an architecture that addresses these concerns early. Rather than creating an architecture that suits the functional requirements of a system and later proves not to work for non-functional requirements like performance and scalability, it is best to test the architecture early. The way to remedy many of the costly and time-consuming issues discovered after the delivery of a system is to include and integrate performance considerations in the development process. Performance should not be left as an afterthought, especially for systems that must scale or produce timely and responsive results. This is especially the case for many systems management applications that are expected to handle an extensive number of events and transactions.
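
One way to integrate this (a sketch only, with an assumed latency budget and a stand-in operation) is to fold a simple latency check into the regular regression suite, so a performance regression fails a build instead of surfacing in production.

    import statistics
    import time

    def p95_latency_ms(operation, samples: int = 50) -> float:
        """Return an approximate 95th-percentile latency, in milliseconds, over repeated calls."""
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            operation()
            timings.append((time.perf_counter() - start) * 1000.0)
        return statistics.quantiles(timings, n=20)[-1]  # ~95th percentile

    def process_event():
        # Stand-in for the real transaction under test.
        sum(range(10_000))

    LATENCY_BUDGET_MS = 5.0  # assumed non-functional requirement, illustrative only

    assert p95_latency_ms(process_event) < LATENCY_BUDGET_MS, "latency budget exceeded"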