B = f(P, E)



We often build talks and workshops with leaders interested in bringing a taste of behavioral science to their organizations. 

To effectively “nudge” anyone or “architect” any choices, we need to understand the basic patterns and pitfalls of human decision-making: We have limited brainpower. We take shortcuts to conserve that brainpower for certain decisions. Sometimes those shortcuts backfire, and in predictable ways.

This interactive talk explores a core handful of those shortcuts – “heuristics” – and predictable errors – “biases” – to provide participants with the psychological backbone for smarter decision-making, nudging, and choice architecting going forward.

Every choice has an architect (or two, or three, or more…) behind it, providing, ordering, and describing a set of options, as well as designing the overall experience of choosing among them. And every one of us plays the role of this architect, in some way or another, whether we realize it or not. How do we present our company’s health plan options? How do we write exams to avoid cheating? How do we ask our children to go to bed? How do we design our kitchen to discourage snacking? If we have employees or colleagues, students or children, spouses, family members, friends, or a future self, we have some ability to architect choice.

All our choices are architected – knowingly or not – but some are good, some are bad, and others are downright ugly. In this talk, we explore examples across industries and everyday experiences, educating, amusing, and engaging participants to become aware and empowered choice architects.

Organizations in the private and public sectors are increasingly exploring tools of choice architecture and “nudging” their employees, customers, and citizens towards one behavior or another. Is this fair? Is this ethical? How do we distinguish right from wrong as we intentionally integrate these insights into our policies and procedures, pricing and politics?

This discussion reviews research on the psychology of fairness, especially as it relates to pricing, then expands to debate the organizational ethics of applying psychology and experimentation for private or public good. Optionally, the ethics of incorporating algorithms and machines into policy and business can also be covered. Participants are left not with rigid answers, but with a flexible framework for coming to their own conclusions.

Our decision-making mistakes stem from two general sources: bias and noise. “Bias” is systematic deviation from what is accurate or true, while “noise” is random deviation from what is accurate or true. Both are prevalent across many types of decisions, and both are costly.
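The distinction can be made concrete with a small worked example (the numbers here are illustrative, not from the talk): imagine several judges estimating the same true value. Bias is how far the group's average lands from the truth; noise is how much the judges scatter around their own average.

```python
# Hypothetical illustration: five judges estimate a true value of 100.
# Bias  = systematic deviation (mean estimate minus the truth).
# Noise = random deviation (spread of estimates around their own mean).
estimates = [108, 112, 110, 109, 111]
true_value = 100

mean_estimate = sum(estimates) / len(estimates)
bias = mean_estimate - true_value  # systematic deviation: 10.0

variance = sum((e - mean_estimate) ** 2 for e in estimates) / len(estimates)
noise = variance ** 0.5  # scatter around the group's own mean

print(f"bias = {bias:.1f}, noise = {noise:.2f}")
```

Note that the two errors are independent: a group can be noisy yet unbiased (scattered around the truth), or biased yet consistent (tightly clustered in the wrong place).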

Why is it, then, that we mostly hear talk about one and not the other? Why do organizations invest in rooting out bias, but barely mention the notion of noise? This talk explores the difference between bias and noise, how these errors manifest in organizations, and how to root them out. It further elaborates on how decision-making errors may be amplified in group settings, and how to set up simple operational guardrails to shrink the impact of these errors. Examples are taken from various industry contexts, and live exercises may demonstrate certain errors in action.

Participants tend to get the most out of this talk when attending with other team members from their organizations.

Experimentation is increasingly emerging as a tool for those outside the ivory tower, for marketers and product designers, operators and managers, CFOs and CEOs alike. If you want your organization to engage in data-driven decision-making, you need data, and experimentation enables you to generate data where none yet exists, tease apart causality, and answer key business questions. When done well, experimentation can help organizations rigorously quantify impact, responsibly roll out initiatives, and share knowledge to avoid wasteful R&D redundancies and drive smarter growth.

The challenge? Experimentation done well is hard in academia and it’s even harder in business. Ineffective randomization, insufficient samples, messy real-world confounds, mining for (false) positives… the challenges to getting a clean and reliable measure of the “truth” seem endless. How can we structure our teams and operations to overcome these? How can we tame the many biases that crop up along the way? And how can our real-world research efforts – however messy – contribute strategic knowledge to our organizations and even fill in the gaps left by academic research?
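To make the core mechanic tangible, here is a minimal sketch of a randomized experiment (all names and numbers are hypothetical, simulated for illustration): random assignment balances confounds on average, so the difference in group means estimates the causal effect of the treatment.

```python
# Illustrative sketch of a simple A/B experiment with simulated data.
import random

random.seed(42)  # reproducible assignment

users = list(range(1000))
treatment = set(random.sample(users, 500))  # effective randomization: 50/50 split

# Simulated outcomes: baseline conversion of 10%, treatment lifts it to 13%.
def converted(user):
    rate = 0.13 if user in treatment else 0.10
    return random.random() < rate

outcomes = {u: converted(u) for u in users}

treat_rate = sum(outcomes[u] for u in treatment) / len(treatment)
control = [u for u in users if u not in treatment]
control_rate = sum(outcomes[u] for u in control) / len(control)

# The observed lift is a noisy estimate of the true 3-point effect;
# with only 500 users per arm it can easily miss by a point or more.
print(f"estimated lift: {treat_rate - control_rate:.3f}")
```

Even this toy version surfaces the workshop's themes: the estimate is noisy at realistic sample sizes, and any failure of randomization would quietly bias it.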

This hands-on workshop is best suited to participants who are newer to experimentation, who seek structure and discipline in their experimentation practice, or who are interested in common mistakes and mitigation techniques.

We also craft bespoke experiences for corporate partners, consulting with their teams beforehand to weave in appropriate examples and stories. These often integrate psychology concepts around perception, value, and influence, as well as some of the core content on heuristics, biases, nudging, and choice architecture. And the format varies by context: lectures, small or big interactive workshops, and panel discussions all work well with sufficient planning.

Bespoke talks work best for organizations able to share background materials, set up targeted interviews with subject matter experts inside the organization, and dedicate several hours to advance iteration of the talk.