Prisma SAS & Radar COC

In 2026, Arco Educação launched a complete pedagogical tracking dashboard for the SAS and COC brands — bringing together, for the first time in one place, performance data, study habits, and personalized recommendations for every student, class, and school.

In March, 71.8% of principals and teachers gave the highest satisfaction score — across more than 656 responses. A result I wasn't expecting when I took over the tribe in June 2025, with six months to deliver and a product that needed to be rethought from scratch.

71.8% Top CSAT score
656+ Responses collected
6 months Zero to launch
~20 Qualitative interviews

The Challenge

The goal was clear: consolidate every aspect of students' pedagogical routine on the platform into a single experience — homework, assessments, and study habits. A live dashboard that teachers, coordinators, and principals could consult daily to make pedagogical decisions with greater confidence.

Within assessments, there was a particularly valuable front: external assessments — practice tests replicating the Enem format — which would give principals a comparative view across schools, coordinators a subject-by-subject map, and students a clear picture of where to improve.

The deadline was November, so the product would be live at the start of the 2026 school year.

The initial vision

Product in delivery mode

When I joined as design lead in June 2025, the team had already conducted several research rounds and built a vision that appeared to be aligned with stakeholders. The work seemed to be in delivery mode — not discovery.

The conflict uncovered

Data that didn't speak to each other

Terms like "performance" and "participation" had different definitions depending on the context. When we tried to consolidate everything into a single view, the data didn't connect. What had been designed previously would not deliver a coherent product.

The public expectation

Presented at Bett Educar

The product had been presented at Bett Educar, the largest education event in Latin America. Schools and principals were excited. The expectation was real — and public. Students, teachers, coordinators, and principals each expected something different.

My Role

I took on four fronts simultaneously. The team was newly formed — people didn't know each other, had no context on the initiative, and the senior designer who had conceived the original screens decided to leave the company once it became clear the solution would need to be rethought.

In an ideal scenario, the team would have had an integration cycle before accelerating. But the November deadline wouldn't wait.

Product reframing

The GPM (Group Product Manager), the Engineering Manager, and I mapped the business rule conflicts and proposed a coherent MVP, cutting everything that could be evolved post-launch without compromising the core value of the delivery.

User research

I led and coordinated around 20 qualitative interviews with teachers, coordinators, principals, and pedagogical specialists. Those conversations redefined the delivery priorities.

Stakeholder alignment

Arco has a complex structure. Portfolio strategy, customer success, data science, and product teams all had different expectations. I facilitated the necessary alignment to arrive at a shared direction.

Execution and team development

With the senior designer gone, I took on the full scope, from discovery to delivery. I brought the junior designer in close: first as an observer of my facilitation process, then supporting operational tasks like interview scripts and prototype iterations. Week by week, I delegated more responsibility. By the end of the process, she operated with full autonomy over the topic, and my role shifted to aligning expectations and maintaining visibility with stakeholders.

What We Built

MVP focus: external assessment data

The interviews were unequivocal: for the SAS and COC brands, the most valuable data came from the external assessments, the practice tests replicating the Enem. That was the data teachers and coordinators used in conversations with families, during grade closings, and in ongoing monitoring.

More than a snapshot of the moment, users asked for a longitudinal view: "I want to see how a student has evolved over time, not just where they stand today."

Study recommendations

The insight that made eyes light up in the interviews was the possibility of automatic recommendations based on test performance: which topics each student needs to review, where the class has gaps, and where the teacher can push further because the group is doing well.

It wasn't just knowing the score — it was knowing what to do with it.

Study habits

Performance without effort context tells only half the story. We included habit data — access frequency, readings, completed tasks, autonomous study — so teachers could distinguish a student with a low grade who is genuinely trying from one who simply isn't engaged.

The experience for each user type

For teachers and coordinators, we designed a simple, filterable list: grade level, class, subject. Fast to navigate for the most frequent jobs — grade closings, parent conversations, pedagogical tracking.

For principals and coordinators, we added a year-start dashboard: is the platform ready? Do all teachers have assigned classes? Are students linked? It also includes a historical performance view by subject, so the year begins with a clear picture of each class's starting point.

All interface work was built on Arco's robust design system, which adapts components and colors for each brand. We ramped up on it quickly, and the system gave us the consistency and speed to prototype and deliver at scale, even with a newly formed team.

Result

71.8% Top CSAT score
656+ Responses collected in March 2026

Initial target: 60% top CSAT scores at the start of the school year.

Actual result: 71.8% of principals and teachers gave a score of 5, across the 656+ responses collected in March 2026. The highest scores came precisely from the users who most drive adoption in schools.

CSAT below 5 was concentrated among teachers and students from one of the brands — a precise signal for the next iteration cycle. The two brands have distinct focuses: one positions itself as a reference in college entrance exam results; the other prioritizes students' holistic development. That difference in purpose likely influenced how value was perceived — and it's what guides the planned iterations.

What I Learned

Product inheritance is an invisible risk

When you join an initiative that already has research done and stakeholders aligned, the natural pressure is to trust what came before and just deliver. It was precisely by not accepting that premise — by questioning the foundations of what had been designed — that we avoided a launch that would have generated more confusion than value.

New teams need fast context, not long timelines

We didn't have the luxury of a careful onboarding. My response was to bring the junior designer into the process — not to protect her from the complexity, but to structure the context so she could operate within it. It was uncomfortable for both of us at first. It worked.

Volume research doesn't replace focused research

Around 20 interviews could have generated noise. What made them useful was the central question guiding all of them: what would you do differently at your school if you had this data available today? That question separated what was "interesting" from what was actionable.

Essentialism is not lack of ambition

The original vision had an internal name: Relatório 360 — a dashboard that would track the full pedagogical journey throughout the year: homework performance, classroom activities, autonomous study, comparative analysis across classes and subjects. We gave all of that up to focus on external assessments.

It wasn't an easy decision — those fronts had real value and would be expected. But the rigor of defining what wouldn't be included was what made it possible to deliver what was — with quality and on time.