Most software is designed by literate people for literate people. The assumption is so foundational that it isn't an assumption — it's invisible. Navigation menus, form labels, error messages, instruction text, confirmation dialogs: every interface element that communicates intent through written language is a literacy requirement, and most product teams never notice the requirement because everyone in the room can read.
In Southeast Asian agricultural and cooperative contexts, the assumption breaks down. A significant portion of smallholder farmer cooperative members have limited reading ability — not just in English, but in Tagalog and regional languages. In parts of the Visayas and Mindanao where cooperative clients use Bayanihan Harvest, functional literacy in any written language is not universal among members who have been farming for thirty or forty years. This is not a failure of the education system — or not only that. It's a design parameter that most technology teams never encounter because the teams are nowhere near representative of the users they're building for.
Designing Bayanihan Harvest's member-facing interfaces for cooperative environments where reading cannot be assumed required a different design process than what a standard UX methodology provides. What follows is an account of what low-literacy design actually means, which patterns work without depending on text, how the design was validated, and what organizational change is required for a tech team to genuinely design for this constraint.
What Low-Literacy Design Actually Means
Low-literacy design is not the same as plain language design, though plain language is a component of it. Plain language assumes the user can read — it just optimizes the reading experience. Low-literacy design assumes that the text may not be read at all, and designs the interface to function correctly under that condition.
The distinction matters because the two design approaches produce different interface decisions.
Plain language design simplifies the words. "Please enter your member identification number in the field below" becomes "Enter your member ID." The reading requirement is reduced but not eliminated. A user who cannot read still cannot use the interface.
Low-literacy design asks what the interface communicates without text. If every label, instruction, and confirmation message were replaced with a blank, could a user who has watched the task performed once complete it? If the answer is no, the interface has literacy dependencies that the design hasn't addressed.
This doesn't mean text-free interfaces. It means that text is a supplement to a visual and procedural structure that works independently of text comprehension. The task flow, the visual hierarchy, the feedback mechanisms, the error states — all of these have to communicate their meaning through visual and behavioral cues, with text providing additional context for users who can access it.
There is a second dimension of low-literacy design that is less commonly addressed: the social dimension of illiteracy. For a user who cannot read, requesting help from someone who can read — a child, a literate neighbor, a cooperative staff member — is a normal coping strategy. This has interface implications: if sensitive information (loan balance, savings amount, personal identity information) is displayed as part of the task flow, the need to involve a reader exposes that information to a third party. Privacy design for low-literacy users has to account for the assistance-seeking behavior that the literacy gap requires.
UX Patterns That Work Without Text
Several specific interface patterns consistently perform better for low-literacy users in the cooperative context where Bayanihan Harvest operates.
Task-flow structure over navigation structure. A navigation structure presents the user with a set of options and requires them to identify which option addresses their current need. A task-flow structure presents a single guided sequence for a specific task: the application knows what the user is trying to do and presents the steps in order. For literate users, navigation structure is often more efficient — it allows direct access to any function without following a sequence. For low-literacy users, the navigation structure requires reading the option labels to identify the correct choice, and requires knowing the vocabulary of the software well enough to match the option to the task. Task-flow structure removes both requirements: the user follows a sequence rather than selecting from a menu.
The implication for Bayanihan Harvest's member transaction interfaces was that instead of a dashboard with a set of navigation options — "Loans," "Savings," "Transactions," "Profile" — the primary flow was organized around tasks: "Record a Deposit," "Check My Balance," "Record a Loan Payment." Each task flow is a linear sequence of screens with a single action per screen, designed to be completed from start to finish without requiring navigation choices.
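As a concrete illustration, a task flow of this kind can be modeled as a fixed sequence of single-action screens, where the only moves available are "advance" and "go back" — no navigation choices. The sketch below is hypothetical: the `Step` and `TaskFlow` types and the "Record a Deposit" steps are illustrative, not Bayanihan Harvest's actual code.

```python
# A task flow as a strictly linear sequence of screens, one action per
# screen. No menu, no branching: the user either advances or steps back.
from dataclasses import dataclass

@dataclass
class Step:
    icon: str    # large visual anchor on the screen
    label: str   # short supplemental text for users who can read it
    action: str  # the single action available on this screen

@dataclass
class TaskFlow:
    name: str
    steps: list
    position: int = 0

    def current(self) -> Step:
        return self.steps[self.position]

    def advance(self) -> bool:
        """Move to the next step; returns False when the flow is done."""
        if self.position + 1 < len(self.steps):
            self.position += 1
            return True
        return False

    def back(self) -> None:
        self.position = max(0, self.position - 1)

# Illustrative flow, not the product's real step list.
record_deposit = TaskFlow("Record a Deposit", [
    Step("piggy-bank", "Deposit", "choose_amount"),
    Step("cash",       "Amount",  "confirm_amount"),
    Step("check-mark", "Done",    "show_receipt"),
])
```

The point of the structure is what it forbids: there is no screen on which the user must read option labels to decide where to go next.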
Visual metaphors that are culturally grounded rather than technically conventional. Software design has a vocabulary of visual metaphors developed from desktop computing conventions: folders, files, trash cans, mailboxes, calendars. These metaphors are learned, not universal — a user who has not previously used software has no reason to connect a folder icon with document storage. Low-literacy design requires using visual metaphors that are legible from the user's own experience rather than from prior software experience.
For cooperative financial transactions, this meant using visual metaphors drawn from the physical cooperative context. A piggy bank icon for savings is more universally legible than a wallet icon because the savings relationship (you put money in, it accumulates, you take it out later) is visible in the metaphor rather than inferred from convention. A handshake icon for loan agreements is more legible than a document icon because the social relationship — not the paperwork — is how loan obligations are understood in community lending contexts.
Audio confirmation and ambient feedback. When an interface action has a consequence — a transaction recorded, a form submitted, an error encountered — the feedback for that consequence is typically visual: a success message, a color change, an error dialog. For low-literacy users, the visual feedback depends on reading the message to understand whether the action succeeded. Audio confirmation — a distinct sound for success, a different sound for error — communicates the outcome state without requiring text comprehension. This is a feature that most software treats as an accessibility accommodation; in low-literacy contexts it's a primary feedback channel.
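One way to treat audio as a primary rather than supplementary channel is to bind every outcome state to both a sound cue and a visual cue, so that neither channel depends on reading. A minimal sketch under stated assumptions — the sound-file names and the `play_sound`/`set_visual` hooks are invented for illustration, not real Bayanihan Harvest assets:

```python
# Every outcome maps to a distinct sound AND a redundant visual cue.
SUCCESS, ERROR, WAITING = "success", "error", "waiting"

FEEDBACK = {
    SUCCESS: {"sound": "chime_rising.ogg", "color": "green", "icon": "check"},
    ERROR:   {"sound": "buzz_low.ogg",     "color": "red",   "icon": "cross"},
    WAITING: {"sound": "tick_soft.ogg",    "color": "amber", "icon": "clock"},
}

def give_feedback(outcome, play_sound, set_visual):
    """Fire the audio cue first, then the visual cue, for one outcome."""
    cue = FEEDBACK[outcome]
    play_sound(cue["sound"])               # audible without text comprehension
    set_visual(cue["color"], cue["icon"])  # redundant visual channel

# Example with stub hooks that just record what would be played/shown.
played = []
give_feedback(SUCCESS, played.append, lambda c, i: played.append((c, i)))
```

The design choice worth noting is that the audio cue fires first: in a noisy cooperative office, a member may register the sound before they look at the screen at all.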
Peer-assisted onboarding as a designed experience. The most effective onboarding for low-literacy users is not self-directed — it is peer-assisted. In the cooperative context, a literate staff member or trained member leader walks the new user through the task flow the first time, completing the task together. The interface has to be designed for this mode: the task flow should be demonstrable by one person while a second person observes, the key action moments should be visually clear enough that an observer can follow without being told what to look for, and the flow should be short enough that a single assisted session is sufficient to establish the pattern.
This is a design requirement that doesn't come from the user's perspective alone — it comes from observing the social context of technology introduction. When a new member receives their loan disbursement, the cooperative staff member who processes the transaction is present. That transaction moment is the primary onboarding opportunity, not a separate training event. The interface has to treat that co-present moment as the designed onboarding context.
How Bayanihan Harvest Was Designed for This Constraint
The design decisions in Bayanihan Harvest that reflect the low-literacy constraint were not all planned in advance. Several were forced by what we observed during early deployment, and the design evolved in response to what cooperative staff and members actually did with the interfaces rather than what we expected them to do.
The most significant structural decision was the separation of the staff interface from the member interface. The staff interface — used by the cooperative bookkeeper, the credit committee, and the general manager — requires functional literacy. These are trained roles with educational prerequisites. The member interface — used by individual cooperative members to check balances, view transaction history, and access their loan records — was designed to function at low-literacy levels.
This separation was not obvious at the start. The initial design assumed a single interface with different permission levels, with the interface complexity varying by role. What the separation addressed was the insight that the literacy assumption is not constant across roles — the bookkeeper is literate, the farmer member may not be — and designing a single interface that works for both requires making it work for the more constrained case throughout, which produces an interface that is unnecessarily limited for the literate staff user.
The member interface's primary screen shows a single large number — the member's current savings balance — without labels. The number is large enough to read at arm's length from a phone held by someone with moderate visual acuity. Below it are three icons, each with a single short word, for the three most common member actions. The design went through six iterations before field observation confirmed that a member who had completed the task once with assistance could complete it independently the second time. Every prior iteration had at least one decision point where field observation showed members pausing, looking to the staff member for guidance, or pressing the wrong option.
The field observation methodology was not a standard usability test. It was cooperative staff watching member interactions during actual transactions and recording specifically where the member looked to someone else for guidance, where the member expressed uncertainty verbally, and where the member made an incorrect action. These are the actual failure points in a low-literacy context — not the answers to "what would you do if..." questions in a controlled setting.
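The recording described above amounts to a very small data model: one row per behavioral signal, tagged with the screen it occurred on, so that failure points can be located per screen rather than per session. A hedged sketch, assuming a `(screen, signal)` log format that is illustrative rather than the team's actual field instrument:

```python
# Aggregate confusion signals per screen to find interface failure points.
from collections import Counter

# The three behavioral signals the observer records.
SIGNALS = {"glance_for_help", "verbal_uncertainty", "wrong_action"}

def failure_points(log):
    """log: iterable of (screen, signal) tuples from a field session.
    Returns a Counter of screens ranked by accumulated confusion signals."""
    return Counter(screen for screen, signal in log if signal in SIGNALS)

# Illustrative session log.
log = [
    ("choose_amount", "glance_for_help"),
    ("choose_amount", "wrong_action"),
    ("confirm",       "verbal_uncertainty"),
]
# failure_points(log).most_common(1) → [("choose_amount", 2)]
```

Ranking screens this way turns a stack of observation notes into a concrete iteration target: the screen accumulating the most signals is the one to redesign first.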
The Testing Methodology Needed to Validate Low-Literacy Designs
Standard usability testing methodology produces misleading results when testing with low-literacy users. The failures of standard methodology are specific:
Think-aloud protocols require verbal articulation that many low-literacy users are uncomfortable with in a formal test setting. Users who would clearly express confusion through their behavior — pausing, looking around, pressing randomly — suppress that behavior in a formal observation session and report that they understood the interface. This is not deception; it's the social calibration that most people apply in formal contexts. The result is that think-aloud sessions produce more positive assessments than actual use would predict.
Task completion as a success metric misses the assistance-dependency rate. If a low-literacy user completes a task but required verbal guidance at two points in the process, that task completion is not independent success — it's assisted completion. Standard usability metrics count the completion and miss the assistance. The relevant metric for low-literacy design is not whether the task was completed, but whether it was completed without guidance the second time the user encounters it.
Recruiting users who match the target literacy level is difficult without making literacy itself a screening criterion, which is socially sensitive. The practical approach is recruiting through cooperative staff who can identify members who will represent the relevant literacy range — not in a clinical assessment sense, but in the practical sense of "this member handles their own paperwork" versus "this member typically brings a family member to handle paperwork."
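The second of these failures suggests a concrete metric: count a task as a success only when the user's second encounter with it needed zero assists. A hedged sketch of how that could be computed, with a hypothetical record format:

```python
# Second-encounter independence: the fraction of (user, task) pairs whose
# SECOND recorded attempt required no assistance. First attempts are
# expected to be assisted; they don't count for or against.
def second_encounter_independence(records):
    """records: list of dicts with keys 'user', 'task', 'assists' (int),
    in chronological order. Returns the fraction of pairs with an
    unassisted second attempt."""
    attempts = {}
    for r in records:
        attempts.setdefault((r["user"], r["task"]), []).append(r["assists"])
    seconds = [a[1] for a in attempts.values() if len(a) >= 2]
    if not seconds:
        return 0.0
    return sum(1 for s in seconds if s == 0) / len(seconds)

# Illustrative data: member A becomes independent, member B does not.
records = [
    {"user": "A", "task": "deposit", "assists": 2},  # first, assisted
    {"user": "A", "task": "deposit", "assists": 0},  # second, independent
    {"user": "B", "task": "deposit", "assists": 1},
    {"user": "B", "task": "deposit", "assists": 1},  # second, still assisted
]
# second_encounter_independence(records) → 0.5
```

Note that a standard task-completion metric would score this data at 100%: all four attempts ended in a completed task.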
The testing approach that produced usable signal was observation during actual cooperative transactions — not simulated tasks — with cooperative staff present in their normal role, and a separate observer whose only job was to record behavioral signals of confusion or uncertainty without intervening. The presence of the cooperative staff member in their normal role eliminates the formal test environment effect: the member is completing a real transaction, not performing for an observer. The behavioral signals — the glances, the pauses, the repeated taps on the same button — are more reliable indicators of interface failure than self-reported difficulty.
The Organizational Change Required
The hardest part of designing for low-literacy users is not the design methodology — it's the organizational precondition that makes the design methodology possible.
Design teams that are not recruited from or embedded in the user community will consistently underestimate the literacy gap and overestimate the legibility of their designs. This is not a failure of intelligence or empathy; it is the predictable consequence of the designers having internalized a literacy-dependent interaction model so thoroughly that they cannot perceive the interfaces they build as dependent on it. The interface that seems obviously clear to a literate designer will present genuine decision points that low-literacy users cannot resolve without assistance, and those decision points will not be visible to the designer until they are surfaced by field observation.
The organizational change is threefold.
Proximity to users has to be built into the design process, not treated as a research phase. A research phase that produces user personas and journey maps from interview data does not provide the same design input as having a designer physically present during cooperative transactions on a recurring basis. The difference is between data about users and direct observation of users. The former is processed and summarized before it reaches the designer; the latter is unmediated. Low-literacy design requires the unmediated version.
Field feedback has to be able to change the design before launch, not after. The standard product development model treats user feedback as post-launch iteration input. For low-literacy users who cannot report interface failures through conventional feedback channels — they won't submit a support ticket or rate the app — the field observation during development is the primary feedback channel. If the development process doesn't have a mechanism for field observation to produce interface changes before the design is finalized, the design will be finalized with its low-literacy failure points intact.
The success metric has to be independent use, not initial assisted use. Products designed for low-literacy users are often evaluated by whether the user can complete a task during a training session. Training sessions with dedicated facilitators are not representative of the conditions under which the product will be used after the training is over. The relevant success metric is whether a user who completed a task with assistance in their first session can complete it without assistance in their second session, in their ordinary context, without the facilitator present. Building this metric into the product evaluation process requires following up with users after initial deployment — a cost that most programs don't plan for. Skipping it is what produces the adoption stall at month eighteen, when the reality of post-training use becomes visible.
Designing software for low-literacy users is a solvable problem. The solution requires taking the constraint seriously as a design parameter from the beginning — not treating it as an edge case that can be addressed with translation or simplified language. It requires field observation as a primary design input, not a research phase. And it requires an organizational culture that treats the distance between the design team and the user community as a risk to be actively managed, rather than as background noise that user research can adequately compensate for.