
Overview
Managing outstanding payments and debt collections can be a daunting task for many businesses. The complexity of tracking multiple cases, sending notices, and maintaining clear communication with debtors often results in inefficiencies and lost revenue. Recognizing the need for a comprehensive solution, the amili platform is designed to streamline the entire debt collection process, offering businesses a centralized hub to manage all their debt-related activities effectively.
Objective
The primary objective of the platform is to provide businesses with a robust tool that simplifies debt collection and management. By integrating case tracking, communication, and analytics into a single portal, we aim to enhance the efficiency and effectiveness of the debt collection process. Our platform is tailored to address the unique challenges faced by businesses in managing outstanding payments, ultimately improving cash flow and reducing the burden of manual follow-ups and fragmented systems.
The problem
Businesses often struggle with the complexities of debt collection due to fragmented systems and a lack of cohesive management tools. This can lead to delays in payment, poor communication with customers, and a significant drain on resources. The absence of a centralized system for tracking and managing debt cases further exacerbates these issues, making it difficult for businesses to stay on top of their collections. Moreover, an increasing number of businesses are facing challenges in getting paid, making it even more critical to maintain oversight and ensure timely payments. Our platform addresses these pain points by offering a unified solution that not only simplifies the debt collection process but also provides valuable insights and analytics to help businesses make informed decisions.
Ref.: Finanstilsynet, Inkassostatistikk for 2. halvår 2023 report (debt collection statistics for the second half of 2023)
Research
To gain a clear understanding of the needs and preferences of the deaf community, extensive user research was carried out in two distinct phases:
Literature Review and Analysis: A review of published materials was conducted to analyze and evaluate the use and effects of sign language. This included academic studies, surveys, and other documents on the use of sign language in healthcare.
Direct Engagement with the Deaf Community: I actively interacted with numerous members of the deaf community. These conversations shed light on the practicalities of using sign language and the drawbacks of existing communication methods. This ensured that the user research was grounded in actual user experiences and needs.
The research led to the development of detailed personas, each representing distinct segments of the deaf community with their unique needs and preferences.
Anna Middleton et al.'s study, "Preferences for communication in clinic from deaf people," surveyed 999 deaf and hard of hearing individuals in the UK to understand their communication preferences in clinical settings. Of the participants who used sign language, 50% expressed a preference for consultations through a sign language interpreter, while 43% preferred direct consultations with health professionals who are fluent in sign language. Only a small fraction, 7%, were open to spoken consultations, provided the health professional demonstrated strong deaf awareness, such as proficiency in lip-reading.
Complementary research conducted in the United States (Scheier, 2009) and the Netherlands (Smeijers & Pfau, 2009) highlighted the same issue of miscommunication between healthcare providers and their patients. The study in the United States found that miscommunication often led to misunderstandings in both diagnostic and therapeutic areas. In the Netherlands, only 13% of doctors and patients rated their patient-general practitioner (GP) communication as good, while 39% described it as moderate or poor. In a similar survey, U.S. physicians noted significantly greater difficulties in communicating with deaf patients compared to hearing patients (Ralston, Zazove, & Gorenflo, 1996). This communication barrier resulted in deaf patients being less likely to trust their physicians and to understand diagnoses and treatments. In a parallel finding from the United Kingdom, 44% of deaf patients reported difficulties in their last interaction with their GP or health center, a stark contrast to the 17% reported in a general population patient survey (NHS England, 2015; SignHealth, 2013).
Many of the mentioned challenges were also highlighted in the conversations I had with various members of the deaf community. Their personal experiences and insights reinforced the need for improved communication between healthcare providers and deaf patients.
Original solution
Lots & lots of usability problems
Customer interviews
We wanted to understand how they used the solution: what their motivations were, what their goals were, and how it all worked in practice.
Old Information Architecture (IA)
New Information Architecture (IA)
Language variations
Sign languages are not universal. Each country, and sometimes regions within countries, have their own sign language, like American Sign Language (ASL), British Sign Language (BSL), etc. This diversity poses a significant challenge in developing a universal translation tool. Just as with spoken languages, sign languages have dialects and idiomatic expressions that can vary widely, even within the same language. Ensuring accurate translation across these variations is a complex task. For the conceptual prototype I used ASL as it aligns best with my native language.
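One way to see the scale of this challenge: a recognizer trained on one sign language's vocabulary cannot decode another's, because the same observed hand shape can carry entirely different meanings. A minimal sketch in Python, where the "models" are stand-in dictionaries and the hand-shape IDs and gloss mappings are purely hypothetical:

```python
# Minimal sketch of the per-language problem: a recognizer trained on one
# sign language's vocabulary cannot decode another's. The "models" here are
# stand-in dictionaries mapping observed hand-shape IDs to glosses; both
# the IDs and the mappings are hypothetical, for illustration only.

ASL_MODEL = {"handshape_17": "THANK-YOU", "handshape_04": "HELLO"}
BSL_MODEL = {"handshape_91": "THANK-YOU", "handshape_04": "GOOD"}

def decode(model, handshape_id):
    """Map an observed hand shape to a gloss under one language's model."""
    return model.get(handshape_id, "<unknown>")

# The same observed hand shape means different things in each language:
print(decode(ASL_MODEL, "handshape_04"))  # HELLO
print(decode(BSL_MODEL, "handshape_04"))  # GOOD
```

This is why the prototype commits to a single language (ASL) rather than attempting a universal translator.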
Facial Expressions
Sign language is not limited to hand gestures; it also includes facial expressions and body language. Facial expressions can change the whole meaning of a sign and are important for conveying tone and emotion. Capturing and correctly interpreting these subtle changes is incredibly challenging. I tried using Google's facial landmark model, which is designed to recognize and interpret facial cues accurately. It worked quite well, and the model was able to perform with a high degree of accuracy.
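To give a sense of how grammatical facial cues can be derived from landmark geometry, here is a minimal sketch that flags raised eyebrows, which in ASL mark yes/no questions. The named landmark points, coordinate layout, and threshold are illustrative assumptions, not the actual model's output format:

```python
# Illustrative sketch: classifying one grammatical facial cue (raised
# eyebrows, an ASL yes/no-question marker) from normalized face landmarks.
# The landmark names, coordinates, and threshold are assumptions for this
# sketch, not the real landmark model's output.

def eyebrow_raise_score(landmarks):
    """Brow-to-eye gap normalized by face height.

    `landmarks` is a dict of named (x, y) points, with y growing downward.
    """
    face_height = landmarks["chin"][1] - landmarks["forehead"][1]
    gap = landmarks["left_eye"][1] - landmarks["left_brow"][1]
    return gap / face_height

def classify_expression(landmarks, threshold=0.10):
    """Label a frame as a question marker when the brows are clearly raised."""
    return "yes/no question" if eyebrow_raise_score(landmarks) > threshold else "neutral"

# Two hypothetical frames: neutral vs. raised eyebrows.
neutral = {"forehead": (0.5, 0.10), "chin": (0.5, 0.90),
           "left_brow": (0.40, 0.30), "left_eye": (0.40, 0.36)}
raised = {"forehead": (0.5, 0.10), "chin": (0.5, 0.90),
          "left_brow": (0.40, 0.25), "left_eye": (0.40, 0.36)}

print(classify_expression(neutral))  # neutral
print(classify_expression(raised))   # yes/no question
```

A production system would of course combine many such cues over time rather than thresholding a single frame.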
Designing the solution
Following the research, I began the development of a solution with a focus on addressing the identified communication challenges. The goal was to create an intuitive, user-friendly interface that facilitates seamless sign language to text translation. The initial phase involved creating basic wireframes to outline the primary functions and layout of the application. This stage was important for establishing the foundation of the user interface.
Next, I developed a more detailed user interface, focusing on ease of use, accessibility, and visual appeal. The design process took into account color schemes, typography, and iconography that would be easily understandable for our target audience. The translation interface is designed to be real-time and interactive, displaying the translated text as the user signs.
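The real-time display logic can be thought of as a transcript that collapses repeated per-frame predictions into a running sentence, so the translated text grows as the user signs. A minimal sketch, with hypothetical gloss labels:

```python
# Illustrative sketch of the real-time display logic: the recognizer emits
# a prediction per video frame, and consecutive duplicates (the same sign
# held across frames) are collapsed into a running transcript. The gloss
# labels are hypothetical.

class LiveTranscript:
    def __init__(self):
        self.glosses = []

    def push(self, prediction):
        """Append a recognized gloss, skipping empty frames and repeats."""
        if prediction and (not self.glosses or self.glosses[-1] != prediction):
            self.glosses.append(prediction)
        return self.text()

    def text(self):
        return " ".join(self.glosses)

t = LiveTranscript()
for frame_pred in ["HELLO", "HELLO", None, "DOCTOR", "DOCTOR", "HELP"]:
    t.push(frame_pred)
print(t.text())  # HELLO DOCTOR HELP
```

In the prototype, the interface re-renders this transcript on every update, which is what makes the translation feel interactive.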
An interactive prototype was then created to simulate the user experience. This prototype was important in visualizing the flow and functionality of the solution.
From the patient's perspective, the doctor's spoken language is translated in real time, providing an immediate and accessible understanding of the conversation. This real-time translation bridges the communication gap, allowing the patient to follow the consultation as it happens, just like a spoken conversation.
Prototyping and results
Following a demonstration-focused prototyping session in which users observed the functionality of the solution without directly interacting with it, the following statistics were gathered to support the potential of the solution:
Other use cases
In addition to its application in medical scenarios, I have also created a prototype for an innovative use of this tool as an accessibility feature in everyday communication. Envisioned as a parallel to the voice recording shortcut in messaging apps, this feature allows for the translation of sign language into text messages. With simple gestures, users can communicate in sign language, and the tool converts these signs into written text. This text can then easily be sent as a message to friends or contacts.
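The hand-off from recognized signs to a sendable message could look roughly like this; the gloss-to-text mapping and the send hook are placeholders for illustration, not the actual implementation:

```python
# Hypothetical flow for the messaging shortcut: recognized glosses are
# converted to plain text and handed to a send function, mirroring how a
# voice-note shortcut hands off audio. The gloss-to-text mapping and the
# send hook are placeholders.

GLOSS_TO_TEXT = {"ME": "I", "LATE": "am running late", "SORRY": "sorry"}

def glosses_to_message(glosses):
    """Turn a recognized gloss sequence into a sendable text message."""
    words = [GLOSS_TO_TEXT.get(g, g.lower()) for g in glosses]
    text = " ".join(words)
    return text[0].upper() + text[1:] if text else text

def send_message(contact, text):
    # Stand-in for the messaging app's send hook.
    return f"to {contact}: {text}"

msg = glosses_to_message(["SORRY", "ME", "LATE"])
print(send_message("Alex", msg))  # to Alex: Sorry I am running late
```

Real sign-to-text translation involves grammar, not word-by-word substitution, so the lookup table above is only a stand-in for the translation step.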