Card Sorting Techniques for Technology Services IA
Card sorting is a structured research method used in information architecture practice to reveal how users mentally group and label content, features, or services. Within technology service environments — enterprise platforms, SaaS products, intranets, and digital libraries — it directly informs navigation hierarchies, taxonomy structures, and labeling systems before those structures are built or redesigned. The method surfaces real user mental models rather than organizational assumptions, making it a foundational tool in the information architecture process.
Definition and scope
Card sorting is formally classified as a participatory design technique in which research participants organize labeled items (cards) into groups that make sense to them, then optionally name those groups. The Nielsen Norman Group defines it as a method for discovering how people understand and categorize information, and it appears as a recommended technique in usability guidance published by the Usability.gov program under the U.S. Department of Health and Human Services.
In technology services IA, the scope of card sorting extends across three primary application domains:
- Navigation architecture — determining top-level categories and sub-categories for websites, portals, and enterprise systems
- Taxonomy construction — validating controlled vocabulary terms and facet groupings before formal taxonomy publication
- Feature organization — structuring dashboards, settings panels, and toolbars in SaaS and mobile applications
The method applies equally to new-build projects and redesign initiatives. It is distinct from tree testing, which evaluates an existing or proposed hierarchy rather than generating candidate structures.
How it works
Card sorting sessions follow a structured sequence regardless of format. The facilitating practitioner or IA team prepares a set of cards — typically between 30 and 100 items — each representing a discrete content type, feature, or topic. The 100-card figure is a widely cited practical ceiling: sessions exceeding it produce participant fatigue and less reliable data (Usability.gov, Card Sorting methodology documentation).
The operational sequence proceeds through five discrete phases:
- Card preparation — Each card carries a single label representing one item. Labels must be unambiguous and free of internal jargon that users would not encounter in the live system.
- Participant recruitment — Participants are drawn from the target user population. A minimum of 15 participants per user segment is the threshold cited in Nielsen Norman Group research for open sort studies to achieve stable clustering patterns.
- Sorting execution — Participants organize cards into groups. In open sorts, they create and name their own groups. In closed sorts, group names are predetermined by the research team.
- Group naming — In open formats, participants label each group they have formed; this labeling data feeds directly into decisions about labeling systems.
- Analysis — Results are analyzed through similarity matrices, dendrograms, or agreement scoring to identify where participant groupings converge or diverge; a minimal similarity-matrix sketch follows this list.
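To make the analysis phase concrete, the sketch below computes a pairwise similarity matrix from open-sort results: for each pair of cards, the share of participants who placed both in the same group. The input format and the `sorts` data are assumptions for illustration; real studies normally rely on tool output rather than hand-rolled scripts.

```python
# A minimal sketch of similarity-matrix scoring for an open sort.
# Input format is an assumption: one dict per participant, mapping each
# card label to the group name that participant assigned it to.
from itertools import combinations

def similarity_matrix(sorts, cards):
    """Share of participants who placed each pair of cards in the same group."""
    pairs = {pair: 0 for pair in combinations(sorted(cards), 2)}
    for sort in sorts:
        for a, b in pairs:
            if sort.get(a) is not None and sort.get(a) == sort.get(b):
                pairs[(a, b)] += 1
    return {pair: count / len(sorts) for pair, count in pairs.items()}

# Hypothetical results: three participants sorting four intranet cards.
sorts = [
    {"Payroll": "HR", "Benefits": "HR", "VPN setup": "IT", "Password reset": "IT"},
    {"Payroll": "Money", "Benefits": "Money", "VPN setup": "Tech", "Password reset": "Tech"},
    {"Payroll": "HR", "Benefits": "Perks", "VPN setup": "IT", "Password reset": "IT"},
]
for pair, score in similarity_matrix(sorts, list(sorts[0])).items():
    print(pair, f"{score:.2f}")  # e.g. ('Password reset', 'VPN setup') scores 1.00
```

Scores near 1.0 mark consensus pairings that should almost certainly share a category; scores near 0.0 mark items participants never associate.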
Software tools used in professional practice — including OptimalSort (Optimal Workshop) and Maze — produce automated similarity matrices and dendrograms as standard outputs, though interpreting those outputs remains a practitioner judgment task guided by IA standards and best practices.
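The dendrogram such tools generate can be approximated with standard hierarchical clustering. The sketch below assumes SciPy is available and uses illustrative similarity scores rather than real tool output: pairwise similarities become distances (1 minus similarity), which feed average-linkage clustering.

```python
# A sketch, assuming SciPy, of deriving a dendrogram from a similarity
# matrix. Card labels and scores are illustrative, not tool output.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

cards = ["Benefits", "Password reset", "Payroll", "VPN setup"]
sim = {  # share of participants who grouped each pair together
    ("Benefits", "Password reset"): 0.0,
    ("Benefits", "Payroll"): 0.67,
    ("Benefits", "VPN setup"): 0.0,
    ("Password reset", "Payroll"): 0.0,
    ("Password reset", "VPN setup"): 1.0,
    ("Payroll", "VPN setup"): 0.0,
}

n = len(cards)
dist = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    dist[i, j] = dist[j, i] = 1.0 - sim[(cards[i], cards[j])]

# squareform() condenses the symmetric matrix into the form linkage() expects
tree = linkage(squareform(dist), method="average")
leaves = dendrogram(tree, labels=cards, no_plot=True)
print(leaves["ivl"])  # card order along the dendrogram axis
```

Cards that merge low in the tree (here, "Password reset" and "VPN setup") are strong candidates for a shared category; late merges suggest natural category boundaries.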
Common scenarios
Enterprise intranet restructuring — When an organization with 500 or more employees migrates legacy intranet content to a new platform, card sorting identifies which departmental groupings align with how employees actually search for HR policies, IT documentation, and operational procedures. This directly addresses the findability failures documented in IA for intranets.
SaaS product feature taxonomy — Product teams building or revising settings architectures in IA for SaaS products use closed card sorts to validate whether a proposed feature taxonomy matches user expectations before development commits to a structure.
E-commerce category navigation — Retail and B2B commerce platforms run card sorts to resolve ambiguous product categorizations — particularly where a single product could logically belong to 3 or more parent categories — reducing navigation dead-ends and improving findability and discoverability. A sketch for flagging such cards follows these scenarios.
Digital library collection organization — Archivists and IA practitioners working on IA for digital libraries use card sorting to validate subject heading groupings against researcher mental models, particularly when collections cross disciplinary boundaries.
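For the categorization ambiguity noted in the e-commerce scenario, one common analysis is a per-card agreement score: the fraction of participants who chose a card's most popular category. The sketch below uses hypothetical closed-sort placements; the card name and category labels are illustrative.

```python
# A minimal sketch of per-card agreement scoring for a closed sort.
# All placement data below is hypothetical.
from collections import Counter

def placement_agreement(placements):
    """Fraction of participants who chose the card's most common category."""
    counts = Counter(placements)
    return counts.most_common(1)[0][1] / len(placements)

# Category each of five participants chose for the card "USB-C hub".
placements = ["Cables & Adapters", "Docking Stations", "Laptop Accessories",
              "Cables & Adapters", "Docking Stations"]
print(f"{placement_agreement(placements):.2f}")  # 0.40, a cross-listing candidate
```

Cards scoring well below the study average are candidates for cross-listing under multiple parents, or for relabeling before the structure ships.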
Decision boundaries
Selecting the appropriate card sort variant requires evaluating four criteria against project conditions:
Open vs. closed sort — Open sorting is appropriate when the navigation or taxonomy structure does not yet exist and the research objective is generative. Closed sorting is appropriate when a candidate structure exists and the objective is evaluative — validating whether proposed categories resonate with users. Mental models in information architecture diverge significantly across user segments, which is why full IA research programs run open sorts before closed sorts.
Remote vs. in-person facilitation — Remote unmoderated sorting via software tools scales to 50 or more participants cost-effectively but sacrifices the ability to capture participants' spoken reasoning. In-person or remote moderated sessions capture qualitative rationale at the cost of smaller sample sizes. Projects with cross-functional user populations — such as enterprise systems serving both technical and non-technical staff — benefit from moderated sessions where participant think-aloud data explains cluster formation.
Card count thresholds — Studies with fewer than 20 cards risk insufficient differentiation; studies with more than 100 cards outrun reliable participant attention spans. The 30-to-100 range represents the professional standard for technology service environments.
Integration with tree testing — Card sorting outputs a candidate structure; tree testing validates that structure against navigation task performance. Running either method without the other produces incomplete evidence. The IA practice reference maintained at Information Architecture Authority treats these two methods as complementary phases of the same structural validation cycle.