Why Slate + Celia beats standalone AI for enrollment
Standalone enrollment AI forces context-switches and a PII handoff. Celia lives in Slate, writes into fields counselors already trust, and never receives names.
A counselor opens Slate first thing in the morning, and again after the first coffee, and again after the 9:30 meeting, and again before lunch, and again after every phone call, and again at the end of the day to clean up notes. The counselors we have talked to estimate they open Slate somewhere between forty and sixty times a day during peak season. Slate is the workspace. It is where the work is done, where the decisions are recorded, where the reports are pulled, where the next action is logged.
Any tool that claims to help with enrollment and does not live inside Slate is asking the counselor to change that rhythm. And the rhythm is not going to change.
This is the framing that drives the architectural decision at the heart of CeliaConnect: Celia writes into Slate via the Slate Source Format — Writeback. Not into a dashboard that sits next to Slate. Not into an “insights portal” the counselor has to remember to check. Not into a Chrome extension that overlays Slate with a floating panel. Into Slate itself, as structured Source Format fields that show up on the student record the same way a manually entered note shows up, filterable the same way, reportable the same way, exportable the same way.
We want to walk through, concretely, why this matters — and why the other approach, the standalone AI tool, fails in ways that are not obvious until the tool is six months into deployment and nobody is opening it anymore.
Where the data lives
Standalone AI tools import Slate data into their own cloud. The vendor pulls your students via an API, stores them in a database the vendor controls, runs analysis on that stored copy, and presents the output on a hosted web app that the counselor has to log into separately. The analysis lives with the vendor. The output lives with the vendor. Slate is a source; the vendor is the destination.
Celia takes the opposite approach. We pull a defined, non-PII field set from Slate, run our analysis, and write the results back into Slate as fields your counselors already work with — ss_celia_risk, ss_celia_recommendation, ss_celia_engagement, ss_celia_readiness, ss_celia_yield, and so on, named per your institution’s conventions. Slate remains the system of record. Everything we produce lives inside your own CRM. If you cancel, the fields stay. The data is yours, where it belongs, in the place your team already knows how to query.
That is not a cosmetic difference. It is the difference between a counselor saying “let me pull up the AI tool” and a counselor saying “let me sort by risk in Slate.”
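To make the writeback concrete, here is a minimal sketch of how Celia-style output could be shaped into a flat, Source Format-style upload keyed by the internal Slate student ID. The field names come from the list above; the helper names, the CSV shape, and the delivery mechanics are illustrative assumptions, not Celia's actual implementation.

```python
# Illustrative sketch of shaping scores into a Source Format-style upload.
# Field names follow the article; everything else here is an assumption.
import csv
import io

CELIA_FIELDS = ("ss_celia_risk", "ss_celia_recommendation",
                "ss_celia_engagement", "ss_celia_readiness", "ss_celia_yield")

def build_writeback_rows(scores):
    """scores: {slate_student_id: {field: value}} -> list of flat rows.

    Each row carries only the internal Slate ID plus Celia fields; no PII
    is needed because Slate matches the row back to the student by ID.
    """
    rows = []
    for student_id, fields in scores.items():
        row = {"slate_id": student_id}
        for field in CELIA_FIELDS:
            if field in fields:
                row[field] = fields[field]
        rows.append(row)
    return rows

def to_csv(rows):
    """Serialize rows as the flat CSV a Source Format upload would ingest."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["slate_id", *CELIA_FIELDS])
    writer.writeheader()
    writer.writerows(rows)  # missing fields are written as empty cells
    return buf.getvalue()
```

Because the rows are keyed by the internal Slate ID, a cancelled contract changes nothing about the data already landed: the fields simply stop updating.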
Who trains the model
Standalone AI vendors typically train or fine-tune their models on your data, often pooled with data from every other customer. That pooling is usually how they justify their pricing — the platform gets smarter as more institutions sign on. In exchange, your students’ behavioral signals (and sometimes their essays, transcripts, or demographic data) become a training input for a model that will serve your competitors tomorrow.
Celia does not train on customer data. Claude, the reasoning engine Celia uses under the hood, runs inference against each student’s signals fresh, guided by your institution’s Data Dictionary and your cohort baselines. Under our Anthropic enterprise agreement, prompt content is not retained for training. Your engagement patterns do not end up in a shared model that another institution benefits from. You are not a silent contributor to a competitor’s edge.
This matters more as AI models commoditize. The advantage is not the base model — everyone will have comparable base models by the end of this decade. The advantage is the institutional context the model reasons inside. You keep that context. We do not pool it.
How the counselor actually reads the output
A standalone tool gives a counselor a ranked list in a separate app. The counselor has to open that app, scan the list, cross-reference each student by ID or name back into Slate, take the recommended action in Slate, then return to the separate app to mark the student as worked. In theory this is fine. In practice, every context switch between two tools adds somewhere between eight and fifteen seconds of friction per student. Multiply that by two hundred students in a day, and the tool is costing the counselor roughly twenty-five to fifty minutes of pure context-switching time, every day. A tool that costs the counselor half an hour a day without visibly paying them back does not survive the second month. Counselors stop logging in, and the tool becomes a budget line that renews once out of inertia and then does not renew again.
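The friction arithmetic works out like this (a quick sketch; the per-student and per-day figures are the estimates above, not measurements):

```python
# Context-switching cost: 8-15 seconds of friction per student,
# roughly 200 students worked in a peak-season day.
def daily_friction_minutes(seconds_per_student, students_per_day=200):
    """Minutes per day lost to switching between two tools."""
    return seconds_per_student * students_per_day / 60

low = daily_friction_minutes(8)    # about 26.7 minutes
high = daily_friction_minutes(15)  # 50.0 minutes
```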
With Celia, the counselor sorts the Slate list by ss_celia_risk descending, reads the ss_celia_recommendation column inline, clicks into the student, and takes the action. The context switch is zero. The tool the counselor is using is the tool the counselor was already going to use. The AI is invisible to the workflow — it shows up as better columns on the list the counselor is already sorting.
We think this is the single most important operational detail about enrollment AI. Every product decision downstream of it gets easier once you accept it.
What happens when the counselor disagrees
Counselors are experts. They will disagree with the AI sometimes, and they should. A counselor who knows the student personally, who has spoken with the family, who can hear hesitation in a voicemail, has context Celia cannot have and will never have.
In a standalone tool, counselor disagreement gets logged inside the vendor’s app — a thumbs-down button, a comment field, a note the vendor promises to use in future training. That feedback is far away from the student record. The counselor is doing data entry for the vendor’s benefit, not for the institution’s benefit. Most counselors stop bothering after the first month.
With Celia, the counselor overrides the score directly in Slate. They change the ss_celia_recommendation value, or set a manual-override flag, or leave a Slate note explaining why. Everything happens in Slate, which means the disagreement is captured where every other institutional record lives, auditable by leadership, filterable by reporting, and preserved long after the vendor contract ends. Celia reads these overrides on the next cycle and adjusts how it weights signals for that student going forward. The feedback loop lives inside your CRM.
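A hypothetical sketch of what an override-aware pass might look like on the next cycle. The manual-override flag and recommendation field mirror the article; the blending weights and function names are illustrative assumptions, not Celia's actual logic.

```python
# Sketch: counselor overrides recorded in Slate take precedence, and
# repeated overrides shift weight toward the counselor's judgment.
def resolve_recommendation(model_rec, slate_record):
    """Prefer a counselor override recorded in Slate over the model output."""
    if slate_record.get("ss_celia_manual_override"):
        return slate_record["ss_celia_recommendation"]  # counselor's value wins
    return model_rec

def adjusted_risk(model_risk, counselor_risk, override_count, base_weight=0.3):
    """Blend toward the counselor's score; each past override adds weight,
    capped so the model's signal is never discarded entirely."""
    weight = min(0.9, base_weight + 0.1 * override_count)
    return (1 - weight) * model_risk + weight * counselor_risk
```

The point of the sketch is where the inputs live: both functions read values that sit in Slate fields, so the feedback loop survives any vendor transition.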
How the VP reports on impact
Enrollment VPs report upward — to a provost, a president, a board, a budget committee. Those audiences do not care about a vendor dashboard. They care about yield, net tuition revenue, melt rate, time-to-deposit, and whether the AI initiative paid for itself.
Reporting out of a standalone tool means pulling one set of numbers from the vendor and another set from Slate and reconciling them in a spreadsheet. The reconciliation is almost always lossy — the vendor’s “engaged student” count is not the same as Slate’s “active lead” count, and the difference has to be explained every quarter. We have seen enrollment teams give up on reporting the AI impact entirely because the reconciliation was too expensive.
Because Celia writes into Slate, every report is a Slate report. Yield probability by territory is a Slate query. Engagement score distribution by cohort is a Slate query. Conversion rate on students who received a Celia recommendation versus those who did not is a Slate query. The VP’s reporting stack does not change. The institutional research team does not have to learn a new tool. The board gets answers in the same format they have always received them, just with sharper underlying signal.
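As an illustration of how flat that reporting becomes, here is a sketch of the recommended-versus-not conversion comparison run over rows exported from a Slate query. The column names (`ss_celia_recommendation`, `deposited`) are assumptions about the export shape, not a documented schema.

```python
# Sketch: conversion rate for students with vs. without a Celia
# recommendation, computed from a flat Slate query export.
def conversion_by_recommendation(rows):
    """rows: dicts with 'ss_celia_recommendation' and 'deposited' keys.

    Returns {True: rate_with_recommendation, False: rate_without}.
    """
    groups = {True: [0, 0], False: [0, 0]}  # key -> [deposits, total]
    for row in rows:
        key = bool(row.get("ss_celia_recommendation"))
        groups[key][0] += 1 if row.get("deposited") else 0
        groups[key][1] += 1
    return {k: (d / t if t else 0.0) for k, (d, t) in groups.items()}
```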
The architectural tradeoff
All of the above is operational. The architectural tradeoff sits one layer below, and it is the thing that matters most to security teams.
Standalone AI tools have to receive student PII to work. That is how they personalize. They need the name to write the email, the email to send it, the phone number to text. The vendor becomes a custodian of every identifying attribute of every student in your funnel — names, emails, addresses, financial aid details, sometimes essays, sometimes transcripts. The vendor’s promise is that their security posture is strong enough to justify that trust. Sometimes it is. Sometimes a breach two years later discovers that it was not.
Celia does not receive student PII. The Slate Query — In Progress we pull is defined during onboarding and specifically excludes names, emails, phone numbers, addresses, DOBs, and financial aid identifiers. Our Workers run a PII-shape detector on every field in the response and fail closed if anything identifying slips through. Claude sees an internal Slate student ID, behavioral signals, milestone states, and your institutional baselines. The reasoning happens on anonymized behavioral data. The Writeback lands on the correct student because the system round-trips through the Slate student ID, not because we ever knew the student's name.
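A minimal sketch of what a fail-closed PII-shape check can look like. The specific regex patterns, function names, and exception type are illustrative assumptions, not the production detector, which would cover more shapes than these four.

```python
# Sketch: fail closed on the first field whose value looks identifying.
import re

PII_PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "phone": re.compile(r"\b\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "dob":   re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

class PIIShapeError(ValueError):
    """Raised when a field value matches an identifying shape."""

def assert_no_pii(record):
    """Fail closed: raise on the first suspicious field instead of
    passing a possibly-identifying value downstream."""
    for field, value in record.items():
        text = str(value)
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                raise PIIShapeError(f"{field!r} matches {label} shape")
    return record
```

The design choice worth noting is the direction of failure: a false positive costs one skipped record on one cycle, while a false negative would put PII in front of the model, so the detector errs toward raising.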
That architectural choice is not something we retrofitted. It is the foundation the whole system was built on, because it solves the CISO problem standalone tools cannot solve. For the full technical walk-through, see the no-PII architecture article.
The summary
Standalone enrollment AI asks the counselor to learn a new tool, asks the institution to trust a new vendor with PII, asks the VP to reconcile two systems in a spreadsheet, and asks the CISO to approve a data flow that did not exist before. Celia asks none of those things. Slate stays the workspace. Slate stays the system of record. Slate stays the reporting surface. The AI shows up as sharper columns on the list your team was already sorting.
The best compliment we have gotten from a pilot counselor is that she forgot Celia was a separate product. As far as her day was concerned, Slate just got smarter.
That is the goal.
See how Celia works end-to-end →