University Hub Platform

An academic platform with multiple roles, processes, and usage scenarios required aligning fragmented decisions into one coherent product direction.

Role:


Product Designer

Scope:


Product Architecture

Product Strategy

UX

Research

Type:


Commercial

Duration:


4 months

Overview

A platform used by tens of thousands of users lacked a single, shared logic.

It served students, faculty, and administration — across multiple campuses, roles, and processes.

From course registration and communication to thesis management and remote work registration.

The product evolved as a collection of local decisions — made independently across departments and campuses.

As a result, the system:

  • behaved differently depending on role and context

  • integrated multiple tools and sources of information

  • included hundreds of user variants


1

For users, this meant a lack of orientation:

  • they didn’t know where they were in the process

  • they didn’t know what would happen next

  • they had no single place they could trust

2

For the organization, this meant constantly “explaining the system”:

  • departments had to manually explain how it worked

  • communication was fragmented and inconsistent

  • the system didn’t scale with growth

3

In practice, this showed up as:

  • redundant features (“ghost features”)

  • fragmented and misaligned communication

  • lack of clear naming, structure, and processes

Initially, I assumed the problem was in the interface and information architecture. It quickly became clear there was no single version of the platform to analyze.

The problem wasn’t the interface.


The platform wasn’t a single product.

It was a system of 200+ variants shaped by roles, permissions, organizational processes, and external tools.


Each user experienced a different version of the system.

In practice, this meant one thing: there was no single reality to analyze, scale, or design as a coherent experience.

To design the system, I first had to define it.

It quickly became clear the problem wasn’t the interface, but the system’s underlying structure.

I paused the initial direction.

Instead of designing surface-level solutions, I started by analyzing the system’s foundations — working closely with developers and university departments.

I wanted to understand:

  • how roles and permissions were structured

  • which processes were actually being handled

  • how organizational decisions shaped the system

Discussions with developers revealed:

  • the system wasn’t a single coherent application, but a set of variants based on roles and permissions

  • there was no shared logic to build the experience on

  • roles were fragmented and inconsistently defined

University departments:

  • operated based on their own processes and needs

  • found the system only partially aligned with their work

  • handled many tasks outside of it

As a result:

  • there was no single, aligned system model

  • no single source of truth

  • users ended up in different versions of the same product

  • and I couldn’t structure the experience without a clear underlying system

Permission Model

I defined the system’s core.

This established a foundation for designing a coherent system at scale.

The system wasn’t a collection of screens.

It was a structure of roles and permissions that defined how it was experienced.

Designing around information architecture would only reinforce fragmentation.

So I shifted the foundation to permissions.


I built a permission model based on:

  • primary roles (student, faculty, administration)

  • sub-roles derived from responsibilities (type of studies, teaching role, employment responsibilities)

  • contextual attributes (campus, faculty, academic cycle, field of study, etc.)

  • statuses affecting access to features (e.g. dean’s leave, full-time studies, student council, disability)
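The layered model above can be sketched as a small data structure. The role names, attributes, statuses, and feature rules below are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the permission model: primary role, sub-roles,
# contextual attributes, and statuses resolve to a feature set.
# All names here are hypothetical, not the platform's real schema.

@dataclass
class UserContext:
    role: str                                       # "student", "faculty", "administration"
    sub_roles: set = field(default_factory=set)     # e.g. {"phd_candidate"}
    attributes: dict = field(default_factory=dict)  # campus, faculty, academic cycle...
    statuses: set = field(default_factory=set)      # e.g. {"deans_leave", "thesis_stage"}

# Access rules keyed by feature, expressed against the full context.
FEATURE_RULES = {
    "view_schedule": lambda u: u.role in {"student", "faculty"},
    "manage_classes": lambda u: u.role == "faculty",
    "register_remote_work": lambda u: u.role == "administration",
    "thesis_management": lambda u: u.role == "student" and "thesis_stage" in u.statuses,
}

def allowed_features(user: UserContext) -> set:
    """Resolve which features a user can access from role and statuses."""
    return {f for f, rule in FEATURE_RULES.items() if rule(user)}

student = UserContext(role="student", statuses={"thesis_stage"})
print(sorted(allowed_features(student)))  # ['thesis_management', 'view_schedule']
```

The point of the structure is that access is derived from one declarative model rather than hard-coded per screen, which is what makes a single coherent system possible at scale.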

I defined the product scope and connected it to the permission model.

This let me see the system not as a set of features, but as a set of actions users actually perform.

The platform consisted of multiple functions and external tools — partially integrated, partially just linked.

I mapped all features and tools and assigned them to:

  • user roles

  • their context

In practice, this meant:

  • analyzing dozens of features spread across different tools

  • mapping them to roles and sub-roles

  • and defining which features were shared, role-specific, or context-driven (e.g. study type, stage, status)

Together with the team, we then structured them into three directions:

  • remove

  • improve

  • build from scratch

Feature Scope

Features stopped being fragmented — they became organized around purpose and meaning.

I designed a shared structure across all roles.

Based on the permission model and defined product scope, I built a single, coherent system instead of separate experiences for each group.

I defined a set of core modules that became the foundation of the entire platform.

Every user saw the same structure, while differences came from available features and variants — not architecture.

This made scalability possible.

Instead of treating the system as a collection of features, I structured it as a set of logical “experience containers”, e.g.:

  • study flow (schedule, classes, grades)

  • student life (events, community)

  • administrative matters (requests, applications)

  • contact and support

Experience Containers

I defined how the system works — and it changed how the product is designed.


Component Model

Instead of creating separate interface versions for each role, I designed a model based on shared components.

Each component was built on a single base structure, with variations driven by:

  • user role

  • context

  • permissions

In practice, this meant:

  • every user saw the same system

  • but experienced it differently

For example — the same calendar component:

  • student → views schedule

  • faculty → additionally manages classes

Without the need to create separate systems or views.
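The calendar example can be sketched as one base component whose capabilities vary with role. The class and action names below are hypothetical, chosen only to show the shared-base-plus-variants idea:

```python
# Sketch of the shared-component model: a single base calendar,
# with extra capabilities unlocked per role instead of separate views.
# Class, role, and action names are illustrative assumptions.

class Calendar:
    """One base structure; variants come from context, not new screens."""

    BASE_ACTIONS = ["view_schedule"]  # every role gets this

    # Role-specific capabilities layered on the same component.
    ROLE_ACTIONS = {
        "faculty": ["manage_classes"],
        "administration": ["manage_rooms"],
    }

    def __init__(self, role: str):
        self.role = role

    def actions(self) -> list:
        # Shared base first, then whatever the role adds on top.
        return self.BASE_ACTIONS + self.ROLE_ACTIONS.get(self.role, [])

print(Calendar("student").actions())  # ['view_schedule']
print(Calendar("faculty").actions())  # ['view_schedule', 'manage_classes']
```

Because every variant extends the same base, consistency is structural: adding a role means adding an entry to the variant map, not building a new calendar.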

The system stopped being a collection of screens and became a set of rules defining:

  • what the user sees

  • in what context

  • and why

This made it possible to:

  • maintain consistency across the entire system

  • reduce the number of components

  • surface only what’s relevant to each user

  • scale the product without fragmenting the experience

I validated the model with users.

Before this stage, the system existed as a coherent model, but hadn’t yet been validated in real use.

I conducted qualitative research.


The goal wasn’t to evaluate the existing system, but to understand the user experience as a whole:

  • how users perceive the study process

  • where they lose orientation and sense of control

  • what information is critical to them

  • and which needs are currently unmet

The research included different user types and contexts (students, PhD candidates, international users, users with disabilities, first-year students, etc.).

The pattern was clear.


Problems repeated regardless of user group.

No single source of truth.

Information was scattered across multiple systems, sometimes available in different places depending on the lecturer.

Excess communication without hierarchy.

Users received too many general messages with no personalization or prioritization — so even the most important ones were ignored.

Lack of process guidance.

Users often had to search for answers themselves across different places, sometimes missing important deadlines as a result.

Lack of context awareness.

The same system was used by different roles (student, PhD candidate, faculty, staff), but didn’t reflect their specific context or needs.

Users:

  • didn’t know where they were in the process

  • didn’t know what would happen next

  • had no sense of control

The problem was the lack of a structure that connects information, processes, and decisions into a whole.

The research didn’t change the direction of the project — but made it tangible.

I identified four key needs:

  • centralized information

  • clear process guidance

  • personalization and context

  • a coherent experience model

The foundation of the platform fundamentally changed.

It stopped being a collection of features and screens.


It became a coherent decision system that brings the complexity of roles, processes, and information into a predictable experience.

Instead of building separate solutions for each role, I designed a single model that adapts to users through context.

Every user saw the same system, but experienced it differently — depending on:

  • their role (e.g. student, faculty, administration)

  • organizational context (campus, faculty, program)

  • current status (e.g. stage of studies)

I built the system around experiences, not organizational structures.

I defined shared modules reflecting real areas of user activity, e.g.:

  • study flow (schedule, classes, grades)

  • student life

  • administrative matters

  • contact and support

Features were no longer organized by internal structure, but by user intent.

I defined a unified component model.

Instead of creating multiple versions of the same features, I designed shared components with contextual variations.

Each component:

  • had a single base structure

  • adapted based on user context

I structured integrations instead of hiding them.

The platform accounted for different levels of integration:

  • native features

  • partially integrated tools

  • external tools

Instead of treating them as a collection of links, they became part of the experience.
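The three integration levels could be made explicit in the system model, e.g. as a tagged registry that tells the interface how to present each tool. The tool names and presentation rules below are placeholders:

```python
from enum import Enum

# Hypothetical sketch: each tool carries an explicit integration level,
# so the UI can present it honestly instead of as a bare link.
# Tool names and presentation wording are illustrative assumptions.

class Integration(Enum):
    NATIVE = "native"          # built into the platform
    PARTIAL = "partial"        # embedded, with a visible hand-off
    EXTERNAL = "external"      # linked tool that opens outside

TOOLS = {
    "course_registration": Integration.NATIVE,
    "library_catalog": Integration.PARTIAL,
    "email": Integration.EXTERNAL,
}

def presentation(tool: str) -> str:
    """Map a tool's integration level to how the experience treats it."""
    level = TOOLS[tool]
    if level is Integration.NATIVE:
        return f"{tool}: render in place"
    if level is Integration.PARTIAL:
        return f"{tool}: embed with a clear hand-off"
    return f"{tool}: open externally, labeled as such"

print(presentation("email"))  # email: open externally, labeled as such
```

Encoding the level explicitly is what turns a pile of links into a designed part of the experience: the user always knows whether they are staying in the platform or leaving it.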

The platform became:

  • consistent and adaptable across all users

  • flexible across roles and contexts

  • scalable without fragmenting the experience

This made it possible to:

  • design processes instead of features

  • create a predictable experience

  • grow the product without losing coherence

Foundation for further development

The system made previously impossible things possible:

  • personalization of the user experience

  • guidance across the entire study journey

  • automation of communication and processes

A natural next step was the “study map” — a system that shows users where they are, what’s behind them, and what’s ahead.


A platform that actively guides users through the entire study process.

The interface is rarely the problem.

The problem is how the decisions behind it are made.

The biggest shift in this project wasn’t a redesign.

It was understanding that product problems rarely come from the interface.

What users see on the screen is a consequence of decisions they don’t see.

Design doesn’t start with form — it starts with understanding the system.

This project is based on real work on an academic platform.


Some details have been generalized due to confidentiality.

Explore other work

Available for meaningful collaborations

krasuskip95@gmail.com

LinkedIn