Overview
At Nakisa, I worked on a production enterprise web platform serving real users who operate on large datasets and repeat the same workflows daily — where a broken state transition or a slow endpoint isn't an inconvenience; it's a blocked workflow.
I owned full-stack feature delivery end-to-end: architecting backend changes across the full Spring Boot layer stack, building Vue 3/Quasar frontend components and wiring them to REST APIs, driving the backend test suite from a fragile baseline to ~90% coverage, and hardening UI accessibility and state handling across critical components.
Backend features
I owned end-to-end backend feature delivery across all six layers of the Spring Boot application — entities, DTOs, mappers, repositories, services, and REST controllers — ensuring data flowed correctly from the database all the way to the API response.
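The six-layer flow can be sketched in miniature. This is a minimal, dependency-free illustration with made-up names (OrderEntity, OrderDto, and so on), not actual Nakisa code; in the real application each layer carries the corresponding Spring annotation, noted in comments so the sketch stays runnable on its own.

```java
import java.util.*;

public class LayeredFlow {
    // 1. Entity: the persistence-layer shape (@Entity in JPA).
    record OrderEntity(long id, String customer, int cents) {}

    // 2. DTO: the shape exposed over the API, decoupled from persistence.
    record OrderDto(long id, String customer, String price) {}

    // 3. Mapper: converts entities to DTOs (often MapStruct-generated).
    static OrderDto toDto(OrderEntity e) {
        return new OrderDto(e.id(), e.customer(),
                String.format("$%d.%02d", e.cents() / 100, e.cents() % 100));
    }

    // 4. Repository: data access (@Repository / Spring Data interface),
    //    simulated here with an in-memory map.
    static final Map<Long, OrderEntity> REPO =
            new HashMap<>(Map.of(1L, new OrderEntity(1L, "Acme", 1250)));

    // 5. Service: business logic and validation (@Service).
    static OrderDto getOrder(long id) {
        OrderEntity e = REPO.get(id);
        if (e == null) throw new NoSuchElementException("order " + id);
        return toDto(e);
    }

    // 6. Controller: maps the HTTP route to the service call
    //    (@GetMapping("/orders/{id}") in the real app).
    static OrderDto handleGetOrder(long id) {
        return getOrder(id);
    }

    public static void main(String[] args) {
        // Data flows entity -> DTO -> API response.
        System.out.println(handleGetOrder(1L));
    }
}
```

The point of the layering is that each shape change (persistence format, wire format) is isolated to one layer, so a schema change never leaks directly into the API response.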
I designed and shipped 8+ REST endpoints supporting high-volume enterprise workflows, integrating PostgreSQL-backed persistence and webhook-driven async processing to move long-running operations off the critical API path and keep response times fast.
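The hand-off pattern behind that can be sketched without any framework. In production the completion signal was webhook-driven; in this hypothetical sketch a plain ExecutorService stands in for the background worker, and the names (Job, submit) are illustrative.

```java
import java.util.concurrent.*;

public class AsyncOffload {
    // A handle the API can return immediately (e.g. with HTTP 202 Accepted).
    record Job(String id, CompletableFuture<String> result) {}

    // Daemon worker threads stand in for the real async processor.
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true); // let the JVM exit without an explicit shutdown
        return t;
    });

    // Fast path: record the work and return a handle right away.
    static Job submit(String payload) {
        CompletableFuture<String> f = CompletableFuture.supplyAsync(() -> {
            // Long-running processing happens off the request thread;
            // in production, completion triggers a webhook callback.
            return "processed:" + payload;
        }, WORKERS);
        return new Job("job-1", f);
    }

    public static void main(String[] args) {
        Job job = submit("report-42");
        // The caller is never blocked by the expensive work itself.
        System.out.println(job.result().join());
    }
}
```

The key property is that the request path only records the job and returns a handle; the expensive work never sits on the API response's critical path.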
I drove database performance improvements by systematically identifying slow queries using execution plans, tightening fetch logic to eliminate over-fetching, improving filtering so the database engine did the heavy lifting, and introducing indexing where full-table scans were killing response times.
Frontend and API integration
I built and owned Vue 3/Quasar components for complex, table-driven workflows — filtering, inline editing, bulk operations, and reviewing large lists — making sure every interaction felt fast and predictable even when the underlying data and business rules were complex.
I treated every API boundary as a strict contract: typed request payloads, explicit validation handling, and error responses mapped to meaningful user-facing messages rather than silent failures or generic error screens.
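The error-mapping half of that contract can be sketched as follows, written in Java for consistency with the other examples here even though in the real UI this logic lives on the TypeScript side. The names (ApiError, mapError) are hypothetical.

```java
public class ApiContract {
    // A typed error the UI can render directly, instead of a generic screen.
    record ApiError(int status, String userMessage) {}

    // Every failure category maps to a specific, user-facing message;
    // nothing falls through silently.
    static ApiError mapError(Exception e) {
        if (e instanceof IllegalArgumentException)
            return new ApiError(400, "Some fields are invalid: " + e.getMessage());
        if (e instanceof SecurityException)
            return new ApiError(403, "You don't have permission to perform this action.");
        return new ApiError(500, "Something went wrong. Please try again.");
    }

    public static void main(String[] args) {
        System.out.println(mapError(new SecurityException()).userMessage());
    }
}
```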
I architected consistent state management across components — standardizing the idle → loading → success/error lifecycle — so the UI never silently ended up in a broken or stale state after a failed request, a slow response, or a concurrent update.
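That lifecycle can be made explicit as a small state machine. In the real UI it lives in Vue 3/TypeScript stores; it is modeled here in Java (sealed interface, illustrative names) purely to show the invariant: a request always lands in one of four named states, never an implicit broken one.

```java
public class RequestState {
    // The only four states the UI is ever allowed to be in.
    sealed interface State permits Idle, Loading, Success, Failure {}
    record Idle() implements State {}
    record Loading() implements State {}
    record Success(String data) implements State {}
    record Failure(String userMessage) implements State {}

    // Every completed request, ok or not, maps to a terminal state,
    // so a failed or slow response cannot leave the UI stale.
    // Responses arriving outside Loading (e.g. after a concurrent
    // update reset the view) are ignored rather than applied.
    static State complete(State current, String data, String error) {
        if (!(current instanceof Loading)) return current; // stale response
        return error == null ? new Success(data) : new Failure(error);
    }

    public static void main(String[] args) {
        State s = new Loading();
        System.out.println(complete(s, null, "Request timed out"));
    }
}
```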
Accessibility (ARIA)
I audited and hardened accessibility across the platform's most-used interactive components: dialogs, form controls, data tables, and action menus.
I introduced and corrected ARIA attributes throughout, restructured keyboard navigation flows so every interaction was reachable without a mouse, and ensured screen-reader output was meaningful rather than noisy. This brought components in line with WCAG standards and resolved real accessibility warnings flagged during development.
This work made the platform usable for a broader range of users and removed a category of issues that had been accumulating across the codebase.
Testing
I owned the backend test suite and drove coverage from a fragile baseline to ~90% in the areas I was responsible for — writing the vast majority of unit and integration tests from scratch using JUnit and Mockito.
I tested behavior, not implementation: every test targeted something that actually mattered — input validation edge cases, permission enforcement, data transformation correctness, and service-layer logic under failure conditions — rather than brittle assertions on internal method calls.
I structured the suite for long-term maintainability: small focused fixtures, dependency mocking only where it added clarity, and assertion patterns that made failures immediately obvious. The result was a test suite that caught real regressions and gave the team confidence to ship changes faster.
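The "behavior, not implementation" distinction can be sketched as follows. The real suite uses JUnit 5 and Mockito; here the service and its rule are hypothetical and reduced to plain Java asserts so the example stays self-contained.

```java
public class DiscountService {
    // Business rule under test: percent must be within [0, 100].
    static int applyDiscount(int priceCents, int percent) {
        if (percent < 0 || percent > 100)
            throw new IllegalArgumentException("percent out of range: " + percent);
        return priceCents - (priceCents * percent / 100);
    }

    public static void main(String[] args) {
        // Behavioral assertions: observable outcomes, never internal calls.
        assert applyDiscount(1000, 25) == 750;   // transformation correctness
        assert applyDiscount(1000, 0) == 1000;   // boundary edge case
        try {
            applyDiscount(1000, 101);            // failure condition
            throw new AssertionError("expected rejection of invalid input");
        } catch (IllegalArgumentException expected) {
            // invalid input is rejected loudly, not silently clamped
        }
        System.out.println("all behavior checks passed");
    }
}
```

None of these assertions would break if the method's internals were refactored; they only break if the behavior users depend on changes — which is exactly when a test should fail.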
What I learned
Nakisa taught me what it actually means to ship in a production environment: features aren't done when the happy path works — they're done when edge cases are handled, the API contract is solid, the tests protect the behavior, and the UI communicates clearly under every condition.
The deepest thing I took away was end-to-end ownership. The best engineers I worked with never handed off a problem at a layer boundary — they traced it from the database query all the way to the pixel on screen, and they left every layer better than they found it.
I also internalized what good Agile practice actually looks like in a fast-moving team: small testable increments, trade-off discussions that happen before implementation rather than after, and code reviews treated as a knowledge-sharing tool rather than a gate.