
A Few Minutes with Alexander Stein


Dr. Alexander Stein is the founder of Dolus Advisors, a consultancy providing actionable, predictive insight into human risk. The company helps senior corporate leaders and boards understand and resolve human-factor issues in organizational ethics, culture, compliance, and governance, and proactively detect, mitigate, and respond to insider threats, executive misconduct, and white-collar and cyber malfeasance risks. We talked to him about the menace of insider threats and how companies can mitigate them more effectively.

You’ve had a very diverse career. Please tell us about that journey so far.

My graduate degrees and training are in psychoanalysis, and I started my career as a psychoanalyst in clinical practice—treating patients, publishing scholarly papers in peer-reviewed psychoanalytic journals, teaching, and speaking at conferences. I found myself both gratified and frustrated by that career. Then I was directly affected by the events of September 11, 2001. That deeply altered my view of what was going on in the world and catalyzed a shift in my thinking about how to respond, both personally and professionally. From that experience, I decided to take my expertise to a broader marketplace—to deploy psychoanalysis rather than practice it.

I repositioned myself to work with business leaders in their organizations, not just treat individual patients, to have an influence and beneficial impact with people who themselves have tremendous influence and responsibility. I began writing a monthly column for Fortune Small Business magazine on the psychology of leadership and entrepreneurship (currently, I’m a contributor to Forbes writing on the psychology of leadership and misbehavior in business), and launched a consulting practice advising senior business leaders, entrepreneurs, and boards. This advisory work combines clinical insight with practical business strategies to address issues involving people, culture, ethical decision-making and other aspects of corporate and organizational life with complex psychological underpinnings. It remains a core part of my business practice to this day.

Not long afterwards, I was approached by a prominent fraud and corruption litigator who asked for my help in understanding and applying a more psychologically sophisticated framework to bring fraudsters to book and recover the assets they’d stolen and sequestered in serious fraud and grand corruption matters.

I joined forces with members of ICC-FraudNet, a global network of elite fraud and asset recovery professionals, working in multidisciplinary teams as an embedded human factors expert in large-scale international fraud cases. This was my initial entry into the malfeasance and misconduct space, which is enormous. From a social justice and humanitarian perspective—for all the people, companies, and communities victimized by it—fraud is unfortunately a growth industry. But as an entrepreneur uniquely positioned to help and add value, that experience was and continues to be important. First, I seized the opportunity to innovate and refine Psychodynamic Intelligence Analysis, my gap-leaping architecture and strategic methodology for addressing fraud. I also became alert to the massive problem of institutional and enterprise risk mitigation and defense, recognizing it as an area with significant opportunities distinct from those in fraud and asset recovery cases, which are always after the fact and reactive. Organizations are, or ought to be, focused on and better prepared for trying to prevent bad things from happening—malicious incident defense, deterrence, and mitigation. Of course, post-crisis response and recovery is critical for those who will inevitably require it, and it remains an important function in my offerings. But I also began purposefully expanding my practice into an area I’ve developed which I call Human Risk Forecasting: specialist, expertise-driven, proactive attention to the human element and organizational psychodynamics in white-collar misconduct and cybersecurity.

Do you think that cybersecurity is more about humans than machines?

The answer is yes. At core, cybersecurity is about people, not technology. The underpinning drivers and all the other so-called “soft” components—what I call the “shadow risks”—that give rise to malicious incidents are deeply embedded parts of the human condition. The introduction of contemporary technology only provides different mechanisms and vehicles—faster and from a distance—for perpetrating the same sorts of nefarious shenanigans—dishonesty, manipulation, espionage, surveillance, theft, extortion, misinformation, unscrupulous practices—that people have inflicted on each other since the dawn of civilization. It’s a mistake for business leaders to consider cybersecurity issues to be weighted more to the technical and technological than to the human psycho-social dimensions.

You head a firm that employs expertise in human risk forecasting, and one of the biggest risks that organizations face now is the insider threat. What do you think companies can do to counter this threat or better protect themselves against it? What is your company doing in that regard?

The insider threat is of course a serious problem for every company. First of all, insider threats are never going to be absolutely neutralized. But to make headway, there needs to be a broadening of perspectives on what constitutes both an insider and a threat. Human threats won’t be effectively mitigated without a more sophisticated understanding of human psychology, particularly the underpinning drivers of intent and malicious action. The human element in cybercrime is typically categorized as a bad actor, a weak link, or an agent of prevention and repair. Those categories have their uses—they’re not entirely wrong—but they’re insufficient and oversimplified. To address insider threats, business leaders really need to understand the depth, substance, and impact of something which is finally being recognized more broadly: cybersecurity is a holistic business and enterprise issue. It isn’t confined just to IT or info-sec.

One of the only ways to mitigate the potential threats from insider malice is to create a culture of principled, moral leadership and ethical governance and business practices. Of course, info-sec, physical security, and all other forms of technological securitization are crucial. But genuinely potent, effective cybersecurity—especially against negligent and malicious insiders, who have already compromised the perimeter no matter how well fortified it supposedly is—is unachievable if the company’s focus is excessively technical. It must involve the interdisciplinary collaboration of GRC—governance, risk, and compliance—not each of those functions working disjointedly, together with business-line heads across the enterprise and a CISO. All of them must be excellent leaders in their own right, not just knowledgeable subject-matter experts, and all must have authorized channels to each other and to the organization’s upper echelon, including an informed and involved board. Also needed are sophisticated human-factor programs involving security awareness and response education and training, and expertly designed and implemented mechanisms for identifying, airing out, and resolving the kinds of people issues that, if overlooked or ignored, can ignite a crisis seemingly without warning.

Do you think cybersecurity education of employees plays a role in making an organization safe from any kind of insider threats?

Absolutely, yes. That said, the type, depth, scope, and sophistication of the training and education matter. Too many organizations consider various forms of perfunctory training, along with the periodic issuance of memoranda updating the workforce on policies and procedures, to be satisfactory. Or they’ll offer fun and engaging training videos (certainly preferable to threatening workers with punitive action if they make a mistake). But these are not sufficient. What’s really required is a more robust understanding of what happens in human ecosystems—how and why people do the things they do—not just giving them guardrails to try to prevent them from doing what they do. Behavior cannot be controlled or legislated, even if the company is run as a totalitarian dictatorship (and hopefully it isn’t); even then, malicious dissent is just waiting to explode.

Enterprises need to prepare their workforce through policies, procedures, and training. But also through authentic example; hypocrisy from managers and senior leadership fatally undermines all other measures. Equally critical is building fundamental shock absorbers into policies, culture, and operations—like tall buildings and suspension bridges, which are designed to sway with powerful forces, not just rigidly withstand them. Companies have to be able to contend with how people actually are, and not just continually insist they be more ideal or aspirational versions of themselves so that problems can be completely prevented. Just to put a tag on that, the short answer is yes, training is very important. But what will truly make a difference is how that training is planned, constructed, and deployed, and how it’s specifically geared to that organization’s culture, population, and business needs, not just in accordance with industry best practices, standards, or regulatory insistence.

A number of C-level executives have repeatedly said that insider threats are often the consequence of bad hiring decisions. In your view, what are recruiters doing wrong when hiring cybersecurity talent, especially at a time when a number of companies are relying on artificial intelligence interventions to get hiring done?

You’ve expanded the question considerably by bringing AI and so-called intelligent agents into the process. Let me leave that to the side for the moment and say that in some respects, you already started answering the question by how you asked it: any organization that looks to fill a critical role with somebody who isn’t committed and dedicated to that role, but is looking at it opportunistically, is already undermining the exercise. There’s something actually self-destructive about hiring somebody to perform a critical function who isn’t qualified to perform in that role at the highest standards. Any organization that does that is choosing to hire a potential insider threat! The damage will come, if not by outright malice then through inadequacy.

What can organizations do? They can apply stringently high standards of excellence to everything that they do. I understand that in certain jurisdictions or sectors, there may be a shortage of highly qualified talent—and the skills gap is a real problem, which the industry is trying to address—but at some point that’s only an excuse, not a valid reason, for not having the human capital the organization requires.

As to the second part of your question, where AI and other emerging automation technologies are being deployed for hiring or other purposes, the industry is way over its skis. In other words, there’s a radical overestimation of its capabilities at this point, in respect of both what’s actually needed and what it can do. I understand that intelligent technologies can process vast quantities of data and triangulate an analysis of data sets at a magnitude and velocity that human beings cannot. But in assessing people, and in other aspects of decision-making, there are many elements of the process which technology really is not able to take over and which should not be outsourced to it—even if keeping humans in the loop leaves us to deal with bias, poor judgment, and a host of other intrinsic and unavoidable human qualities. My work with technology companies that develop or sell these systems, and with organizations looking to onboard and integrate automated technologies into their operations, involves helping decision-makers more clearly strategize and mitigate the unintended consequences of misunderstanding the interface of the human and the technological.

One of the aspects we discussed earlier is the skills gap. We are all aware that it exists, and a number of C-level executives agree that one of the reasons behind rising cyber threats is the lack of enough people to handle those threats. On the other hand, a recent survey asserted that 45% of companies have no cybersecurity managers at all. Do you think we need more role models in the field?

Let me try to parse my answers to briefly touch everything you’re raising.

I am familiar with that study, and the statistics it lays out are sobering, but at the same time really not surprising. The issue of developing role models is enormously important, within an organization as well as across industries. However, in itself, it is not going to close a skills gap. Developing talent is something that needs to begin early, and it includes addressing the social inequities behind the lack of access to technology and Internet connectivity in low-income and rural areas. In other words, it needs to begin by developing more technological natives among young children—allowing them to become knowledgeable, fluent, and interested in all of the issues that relate to cybertechnology as just another part of maturation and development—while at the same time redressing some of the socio-economic issues that contribute to a lack of diversity in the talent pool. The later stages of education, in high school and in university, are equally important for further developing and strengthening interest, understanding, and fluency with technology and how to use it responsibly. But it also means constructing curricula that go beyond education in technology and computing—that delve into the arts and humanities, teach ethics, conflict resolution, and fair play, and instill pro-social values.

Being a thought leader or some form of mentor will certainly be an enticement for people who are out in the workforce and hunting for jobs. But that’s not going to solve a workforce skills gap across the board. I would also add that there can be different types of influencers and leaders in a company besides just those credentialed or expert in a particular area of technology. Leaders who demonstrate integrity, fairness, and social awareness, who understand how to treat their workforce—with respect and as valuable contributors—and who understand how to motivate people to do their best are the kinds of role models who will not only attract talented young professionals but will also indirectly mitigate insider threat risks.

Do you have any thoughts as we wrap up the interview?

First, thank you so very much for inviting me to be interviewed. I’m a specialist in people—mental architecture, human behavior, and organizational psychodynamics—not a CISO or technologist. I’m delighted to have this opportunity to shine a light for your subscribers and readers on the critical but too often overlooked and misunderstood human-factor aspects of cybersecurity.

The key takeaway I’ll emphasize first is how important it is for senior leaders and other decision-makers in enterprises to appreciate the complexity and ferocity of human psychology. The ways in which the human element is marginalized or subordinated to technology, front-line operational efficiencies, and dutiful but flat-footed compliance and business practices are detrimental, and they create clusters of otherwise avoidable unintended consequences, nearly all of which remain invisible and unconsidered—literally unknown unknowns—until they’re not.

Though most business leaders would agree that prevention is preferable to recovery, reactive crisis triage is the norm and proactive pre-emption the exception. Many companies go about their business falsely assured, unaware of the human rakes-in-the-grass and shadow risks to which they remain vulnerable. These blind spots take several forms. Good solutions start with accurately understanding the problem: the tendency to be reductive and simplistic in explaining the drivers of malicious decision-making and behavioral triggers, and then to misdiagnose root causes, invariably leads to cyclical repetitions of the problem—and frequently the pseudo-solutions leave the original issue intact while creating entirely new problems as well. Insider threats and other cybersecurity issues aren’t avoided or resolved with good plans alone; there’s a large delta between the blueprint of ideal conduct and the reality of actual human propensities. These and other blind spots inevitably degrade functions intended to monitor, regulate, mitigate, control, or remediate various risks. Many other countermeasures are compromised by the resulting compounding of technical and conceptual debt. And these, in turn, give rise to clusters of waterfall issues that compliance, risk, and info-sec officers, and other business-line directors, aren’t anticipating and are at a loss to address.

Companies that are truly serious about mitigating enterprise malfeasance—insider threats and other cybersecurity vulnerabilities—need to know that understanding the human element is central to that project.