The Ethics Trap of Going Straight Into Private Practice

This article originally appeared in US Insider on March 10, 2026.

Dayna Guido Warns the Next Ethics Crisis in Therapy Will Look Like “Confidence Without Supervision”

In nearly every profession shaped by expertise, there is a quiet apprenticeship period. Surgeons do not operate alone immediately after passing exams. Attorneys do not argue their most complex cases without mentorship. The gap between qualification and mastery is expected.

Therapy, however, is beginning to blur that line.

Across the country, newly licensed clinicians are moving directly into private practice at a pace that was unusual a decade ago. Independence is framed as success. Efficiency is framed as intelligence. And digital tools now promise guidance at a speed supervision cannot match.

Dayna Guido, a clinical social worker, educator, and longtime ethics leader, believes this convergence of incentives is setting the stage for a predictable professional reckoning. The next ethics crisis in therapy, she argues, will not look like malpractice born of bad intent. It will look like something far more subtle and far more common: confidence without supervision.

With more than 40 years in the field and more than two decades of teaching graduate students, Guido has watched waves of change reshape mental health practice. What concerns her now is not technology itself. It is what happens when early-career clinicians are structurally encouraged to accelerate past the developmental stage where clinical judgment is formed.

And in therapy, judgment is not a feature. It is the profession.

The New Pipeline Into Private Practice

In recent years, Guido has observed a quiet shift. Interns finish their required hours, pass their exams, and move almost immediately into private practice. Ten years ago, many early-career clinicians expected to spend significant time inside agencies, hospitals, or group practices where supervision was embedded into the structure. Today, the cultural narrative is different.

Independence is framed as success. Private practice is framed as freedom. The language of entrepreneurship has replaced the language of apprenticeship.

Guido is careful not to moralize this shift. It is not a character flaw. It is a structural incentive problem. Graduate students carry debt. Agencies struggle with burnout and paperwork overload. Social media feeds are filled with therapists describing six-figure practices and flexible schedules.

The message is clear: skip the system. Go straight to independence.

What rarely makes the headline is what independence costs when it arrives too early.

Two Pressure Drivers: Professional Acceleration and AI Guidance

The acceleration is not coming only from outside the field. It is increasingly reinforced within it. Newly licensed therapists are encouraged to build their brand, fill their caseload, streamline documentation, and establish financial stability as quickly as possible. None of these goals are unethical. They are practical realities of modern practice.

The difficulty is that running a practice well is not the same as becoming a well-formed clinician. Financial independence can arrive quickly. Clinical judgment rarely does. Therapy is not a product perfected through efficiency. It is a discipline shaped over time, under supervision, in conversation with clinicians who have practiced longer and seen more.

AI adds another layer of pressure. It offers instant answers to complex questions. Emerging clinicians are already asking about diagnoses, interventions, documentation language, and even supervision scenarios. In many professional environments, speed and output are treated as indicators of competence. In therapy, they are not.

Guido draws a sharp distinction. AI can augment. It cannot form judgment. It can generate options. It cannot sit in a room and feel the silence between words. Supervision also provides attunement between the experienced professional and the unseasoned clinician. It provides an opportunity to discuss feelings such as imposter syndrome with a real, live human being who has encountered similar situations.

Clinical harm is often delayed and difficult to detect. A therapist can believe they are helping while subtly missing what is most essential. The client may not recognize the misstep until much later. By then, the damage is relational.

Competence Develops in a Relationship

Supervision is frequently misunderstood as a bureaucratic requirement. Guido insists it is the opposite. It is the environment where clinical judgment is formed.

There is a difference between knowledge and discernment. Knowledge is what you learned in a textbook or lecture. Discernment is what you can safely do when the situation becomes unpredictable.

A clinician might know how to discuss boundaries. In supervision, they confront what happens when a client’s story intersects with their own unresolved history. A therapist might understand diagnostic criteria. In supervision, they learn to notice the pause before a client answers, the shift in tone, and the body language that contradicts the narrative.

Guido describes supervision as an interface between human beings. It is where competence becomes embodied. It is where overconfidence is tempered, and insecurity is contained. It is where ethical reflection becomes a habit rather than a reaction.

Without that relational container, early career clinicians often lack calibration. They may not recognize when a case exceeds their skill set. They may not see how subtle countertransference is shaping their decisions. They may not have a trusted colleague to consult when something feels off.

The risk is not only client harm. It is clinician burnout, preventable ethics complaints, and the quiet erosion of professional confidence.

The Coming Rise in Ethics Complaints

Guido anticipates a pattern that will be easy to miss at first and increasingly difficult to ignore over time. As clinicians enter private practice with minimal supervision, they often carry a complex caseload and financial pressure while lacking consistent consultation and developmental guidance. In that environment, small clinical misjudgments can compound.

When complaints eventually surface, the public rarely distinguishes between an under-supported new clinician and the profession as a whole. Trust in therapy is collective. If practice begins to appear rushed, formulaic, or overly dependent on automated systems, credibility does not erode only at the individual level. It weakens across the field.

Licensure boards, insurance providers, and regulatory bodies respond to patterns, not isolated anecdotes. What initially appears to be a personal miscalculation can become evidence of a systemic vulnerability. The consequences extend beyond the clinician involved, affecting professional reputation, client confidence, and institutional liability.

For this reason, Guido does not frame the issue as a generational critique. She frames it as a professional responsibility.

Supervised Growth as an Ethical Responsibility

Independence is not inherently virtuous if it is premature.

Guido believes supervised growth is a moral commitment to competence. The profession has an obligation to protect clients and emerging clinicians alike. That protection does not come from fear-based messaging. It comes from normalizing the idea that mastery takes time.

She asks graduate programs to say something out loud that is often implied but not emphasized: private practice is not an achievement unlocked by a master’s degree and licensure alone. It is a responsibility that demands consultation, mentorship, and ongoing accountability.

Schools can integrate business ethics into curricula so students understand both the financial and moral dimensions of practice ownership. Agencies can modernize supervision structures to make them feel developmental rather than punitive. Emerging clinicians can intentionally choose mentorship, even when it slows the path to independence.

AI tools can support documentation and organization. They should never replace human supervision, ethical reflection, or accountability.

An Ethical On-Ramp to Independence

Guido’s alternative is not prohibition. It is design.

An ethical on-ramp into private practice would include structured supervision or consultation groups, formalized mentorship agreements, and transparent expectations about the scope of competence. It would treat supervision as a long-term investment rather than a temporary hurdle.

It would also acknowledge a deeper truth. Therapy is built on human connection. The profession cannot afford to outsource the development of that capacity to systems that lack a human nervous system, a body, or a conscience.

When Guido talks about AI, she does not sound alarmist. She sounds measured. She has lived through previous waves of technological change. Calculators did not eliminate mathematical thinking. But therapy is not arithmetic. It is relational attunement.

The next ethics crisis, she suggests, will not look dramatic at first. It will look efficient. It will look confident. It will look like independence was achieved quickly.

And then, quietly, it will look like supervision that never happened.

For a profession entrusted with the most intimate parts of human experience, that is a risk too significant to ignore.

Moral Deskilling: How Automation Is Quietly Weakening Ethical Judgment in Mental Health

This article originally appeared in CEO World on January 21, 2026.

Ethics rarely collapse in a dramatic moment. They erode. Quietly. Increment by increment. Often with good intentions.

In mental health care, ethical drift does not usually announce itself as misconduct or malpractice. It shows up as convenience. As relief. As the gentle outsourcing of decisions that once required pause, discomfort, and human deliberation. The rise of automation and AI has not created this drift, but it has accelerated it. What we are witnessing now is not simply a technological shift, but a cognitive one. A gradual weakening of ethical muscle memory. A phenomenon increasingly described as moral deskilling.

The greatest risk of AI in mental health is not overt misuse. It is ethical atrophy.

When Ethics Become Delegated

Ethical decision making is not the same as ethical delegation. One requires reflection, context, and relational awareness. The other requires trust in a system built by someone else, trained on data we did not curate, and optimized for efficiency rather than meaning.

In everyday clinical workflows, this delegation can feel benign. Documentation templates that suggest phrasing. Decision support tools that offer diagnoses, treatment ideas, or risk flags. Automated notes that reduce the burden of paperwork. Each tool promises safety, compliance, and speed. And often, they deliver.

But over time, reliance on these systems subtly reshapes how clinicians think. The act of asking, “What is the right thing to do here?” becomes replaced by, “What does the system recommend?” Ethical reasoning shifts from an internal process to an external consultation. Clinicians may feel more secure, even more compliant, while becoming less reflective.

According to Dayna Guido, a clinical social worker, educator, and longtime ethics leader, this shift is not hypothetical. She sees it emerging most clearly in overreliance on expediency. When tools make work faster, clinicians are less likely to check sources, question assumptions, or examine the ethical implications embedded in the output.

Ease becomes a proxy for accuracy. Certainty becomes a proxy for ethics.

The Comfort of Not Knowing

Automation offers something deeply appealing to professionals under strain. It reduces uncertainty. It narrows ambiguity. It provides answers when the emotional weight of not knowing feels unbearable. In mental health, uncertainty is not a flaw in the system. It is the system. Ethical practice requires sitting with complexity, holding competing values, and tolerating discomfort long enough to make a thoughtful choice.

As Dayna Guido puts it, “When clinicians begin to rely on automation to think for them, they may feel safer, but they are actually exercising their ethical muscles less. Ethics is not a checklist or an output. It is a lived, internal process that has to be practiced in real time, with real people.” AI tools often remove that friction, offering clarity without context and confidence without attunement.

This is where ethical confusion can quietly take hold. When a system responds smoothly and convincingly, it feels ethical. The clinician experiences relief. The tension dissipates. But the absence of tension does not equal moral clarity. It often signals that a decision has bypassed the very process that gives it ethical weight.

Guido notes that clinicians frequently do not know where an automated recommendation originates. Who trained it. What standards shaped it. Whether it reflects legitimate clinical consensus or simply aggregated patterns. Unlike consulting a diagnostic manual or peer-reviewed literature, automation rarely exposes its editorial process. The result is a false sense of safety.

Why Ethics Cannot Be Outsourced

Ethics cannot be fully codified because human situations are not static. They are relational, contextual, and embodied. Algorithms struggle with nuance because nuance resists standardization.

In clinical practice, ethical decisions are rarely about rules alone. They are about relationships. About timing. About how a person is responding in their body, not just in their words. About what has been lost, what is emerging, and what cannot be neatly categorized.

Guido often points out that in supervision, ethical clarity emerges through human connection. A supervisee grieving the loss of a long-loved pet may need space, presence, and attuned judgment before any checklist applies. An AI response might acknowledge the loss. A human supervisor reads readiness, emotional capacity, and relational cues in real time.

This distinction matters. Compliance operates horizontally. Ethics operate vertically. One ensures rules are followed. The other asks whether the action serves the human being in front of us.

As Bessel van der Kolk’s The Body Keeps the Score has shown, much of human experience is embodied. Trauma, grief, and anxiety live in the nervous system. No automated system can fully interpret body language, energy, or the unspoken dynamics between people. When ethics are reduced to outputs, those dimensions are lost.

Rebuilding Ethical Capacity

The solution is not to reject technology. It is to recenter ethical practice as a skill, not a rule set.

Guido emphasizes that ethical reasoning must be practiced deliberately, especially in supervision and education. Supervisors can ask clinicians how they arrived at a decision, not just what decision they made. Educators can create experiential learning environments where discomfort is part of the process. Reflection becomes an active exercise, not a postscript.

Curiosity is a safeguard. Questioning sources. Examining assumptions. Pausing before accepting convenience. These behaviors rebuild ethical strength. They remind clinicians that technology is a tool, not an authority.

The Future of Ethical Authority

As AI becomes more embedded in mental health care, the distinction between ethical leaders and rule enforcers will grow sharper. Ethical authority will not belong to those who know the most regulations, but to those who can integrate knowledge with human presence.

The next generation of clinicians will need ethical fluency, not just compliance literacy. They will need to know when to consult a system and when to resist it. When to accept support and when to sit with uncertainty.

Guido’s work reframes ethics as something alive. A capacity that can be strengthened or weakened depending on how it is used. In an era of automation, the most radical act in mental health care may be choosing not to outsource judgment.

Ethics do not disappear overnight. They fade when they are no longer practiced. And they return when professionals decide that being human, fully and imperfectly, is still the point.

Who Is Responsible When the Algorithm Is in the Room?

This article originally appeared in Tech Times on January 9, 2026.

Rethinking Clinical Supervision in AI-Influenced Care

The supervision room has always been a space of translation. A clinician arrives carrying fragments of a session: a tone that lingered too long, a silence that felt weighted, a decision that did not quite settle in the body. A supervisor listens, asks questions, and helps transform experience into ethical judgment. For decades, this exchange assumed something simple but foundational: that clinical decisions emerged from human perception, human reasoning, and human responsibility.

That assumption is quietly breaking down.

Today, clinicians increasingly arrive with another presence in the room, one that does not speak aloud but shapes the conversation all the same. An algorithm has suggested a diagnosis to consider. A documentation tool has summarized risk factors. A generative system has offered treatment language that feels, at first glance, uncannily precise. None of these tools claims to make decisions. They call themselves support, assistance, augmentation.

Yet their influence is real, and often invisible.

This is where ethical supervision now finds itself unprepared. Most ethical frameworks in mental health were built to regulate relationships between people, not between people and systems. They assume that supervisors can trace how decisions are made, evaluate clinical reasoning, and intervene when judgment falters. But when algorithms shape perception before a clinician even knows what they are seeing, accountability becomes far less clear.

According to Dayna Guido, this ambiguity is not a technical problem. It is an ethical one, and it is already reshaping the profession.

Supervision Was Built for Humans, Not Systems

Imagine a supervisee describing a treatment decision with confidence. Their reasoning is clean, well-organized, and aligned with best practices. Only later does it emerge that much of that clarity came from an AI-generated clinical summary they reviewed before supervision. The supervisor is now responsible for guiding a decision they did not fully witness and may not fully understand.

Traditional supervision models offer little help here. They were designed to evaluate human reasoning processes: how clinicians interpret cues, manage countertransference, weigh risk, and respond to uncertainty. Algorithms do not participate in these processes. They bypass them.

Guido, who has spent more than four decades teaching, supervising, and serving on ethics committees, describes this as a structural mismatch. Supervision still assumes that if something influenced a decision, it would be named, remembered, and discussable. AI often works differently. Its influence can be ambient rather than explicit. It shapes what feels obvious, what seems urgent, and what appears negligible long before a clinician articulates their thinking.

The ethical problem is not that clinicians are using tools. It is that supervision is not yet equipped to see how those tools are shaping judgment.

The Invisible Shift Inside Clinical Decision Making

One of the most subtle changes AI introduces is cognitive offloading. When a system reliably organizes risk factors or proposes diagnostic possibilities, clinicians may feel more confident more quickly. Confidence, in clinical work, is not inherently dangerous. But premature certainty is.

AI does not simply provide answers. It trains attention. Over time, clinicians may begin to notice what aligns with algorithmic outputs and overlook what does not. Nuances that live in the body, such as a client’s micro-movements, pacing, and affect shifts, may receive less weight than patterns that appear legible to a system.

Supervisors, meanwhile, may hear polished clinical narratives without realizing how much of that coherence was externally generated. The supervision conversation remains fluent, but something essential has changed. The clinician’s embodied intuition has been partially outsourced.

Where Accountability Breaks Down

When something goes wrong in AI-influenced care, the default answer is that the clinician remains responsible. Legally, this is often true. Ethically, it is increasingly insufficient.

If a supervisor does not understand the tools shaping a supervisee’s thinking, can they meaningfully oversee that thinking? If a system is labeled decision support, but its outputs consistently guide clinical direction, where does responsibility actually sit?

The ethical danger is not that clinicians are irresponsible. It is that responsibility itself has become harder to locate. When an algorithm frames a clinical question before a supervisor ever hears it, accountability does not vanish. It diffuses. And supervision, as it currently exists, has few tools for tracing that diffusion.

“Supervision has always been about understanding how a clinician thinks,” says Dayna Guido. “When algorithms begin shaping that thinking before it ever reaches the supervision room, ethical responsibility does not disappear. It becomes harder to see, and far more important to name.”

A New Model for Ethical Supervision

This is where the profession must resist the temptation to treat AI ethics as a compliance problem. Checklists can confirm whether a tool is permitted. They cannot reveal how that tool has influenced judgment, confidence, or clinical pacing.

Guido argues that supervision must evolve from procedural oversight into ethical inquiry. Supervisors need frameworks that help them ask better questions, not policies that attempt to control every variable. AI literacy matters, but only insofar as it enables deeper reflection.

Instead of asking whether AI was used, supervisors might ask how a conclusion came to feel clear. What information carried the most weight? What uncertainties were resolved quickly, and which were left unexplored? These questions do not accuse. They surface influence.

What Ethical Supervision Looks Like in Practice

In practice, ethical supervision requires a different kind of listening. Guido encourages supervisors to pay attention to subtle signals: language that feels unusually definitive, risk assessments that move too quickly from ambiguity to resolution, documentation that is technically sound but emotionally thin.

Red flags are rarely dramatic. They appear as small shifts away from presence. Effective supervision responds by slowing the process down, inviting clinicians back into their sensory experience of the work, and helping them articulate what they noticed before any system offered structure.

Crucially, this approach avoids fear-based oversight. Clinicians who feel policed will hide their tool use. Clinicians who feel supported will examine it. Ethical supervision depends on trust, curiosity, and a shared commitment to protecting the human core of care.

The Cost of Avoidance

If supervision fails to adapt, the consequences will not arrive as a single scandal. They will accumulate quietly. Ethical erosion rarely announces itself. It begins with small compromises in attention, accountability, and reflection.

As AI becomes more deeply embedded in clinical workflows, the credibility of the profession will hinge on its willingness to confront these shifts honestly. The question is not whether algorithms will influence care. They already do. The question is whether supervision will remain a living ethical practice or retreat into ritual.

The algorithm is already in the room. Ethical supervision now must decide whether it will pretend not to notice, or whether it will evolve to meet the moment, naming influence clearly, holding responsibility carefully, and ensuring that clinical judgment remains accountable not just to outcomes, but to the values that make care humane in the first place.