Many founders assume that responsible AI practices are difficult to implement and will slow their business down, or that avoiding a harmful product requires building a huge team. The truth is much simpler: practices that produce better outcomes and make business sense are also effective at reducing harm. These practices rest on the fact that successful AI implementation depends on people, not just data.
AI works by learning a general policy that makes reasonable decisions in most cases. A model cannot be trained to anticipate every possible input, so any AI will sometimes fail. That is why systems must keep humans in the loop and be designed with them in mind. Whether it is an operator stepping in to augment the AI when it is uncertain, or a user choosing to accept, reject, or modify a model's output, these people determine how well any AI-based solution works in the real world.
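One common way to implement this handoff is to route low-confidence predictions to a human operator instead of acting on them automatically. A minimal sketch in Python, where the `classify` function, its toy heuristic, and the 0.85 threshold are all illustrative assumptions rather than anything from a specific product:

```python
CONFIDENCE_THRESHOLD = 0.85  # illustrative value; tune per domain

def classify(text: str) -> tuple[str, float]:
    """Stand-in for a real model call; returns (label, confidence)."""
    # Toy heuristic so the sketch runs end to end.
    if "refund" in text.lower():
        return ("billing", 0.95)
    return ("unknown", 0.40)

def route(text: str) -> dict:
    """Act on the model's answer only when it is confident;
    otherwise escalate the case to a human operator's queue."""
    label, confidence = classify(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"label": label, "decided_by": "model", "confidence": confidence}
    return {"label": None, "decided_by": "human_queue", "confidence": confidence}
```

The design choice here is that uncertainty is an explicit, first-class outcome: the system never silently guesses, and the escalation path is part of the product from day one.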
In recent years, many companies have planned to launch services as end-to-end AI-driven processes. When those processes break down across a wide range of cases, the people most harmed tend to be those already marginalized. In trying to find the failures, founders end up subtracting one component at a time, while still trying to automate as much as possible. The better approach is the reverse: introduce one AI component at a time. Many AI processes are less expensive to run with humans in the loop, and founders who build an end-to-end system with many components coming online at once will find it hard to identify which parts are actually best suited to AI.
AI is typically used to automate part of an existing workflow, and a big question is how much to trust that workflow's output. Before automating any process, founders should analyze its strengths and weaknesses to reduce the risk of mismatch. Many AI-based solutions focus on producing a recommendation as their output; once a recommendation is made, a human still has to act on it. According to one founder, customers ultimately used their product more effectively when they had to customize it before use.
A poor recommendation followed blindly, without context, can cause more harm; conversely, great suggestions can be rejected if the humans in the loop lack context and do not trust the system. Instead of delegating decisions away from users, give them tools to make those decisions themselves. This approach empowers humans in the loop to identify problematic model outputs. One founder shared that when their AI made direct recommendations, users didn't trust it.
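The "tools, not delegation" idea can be sketched as a data model in which the AI only proposes, and every recommendation carries its rationale and waits for a human to accept, modify, or reject it. The `Recommendation` class, its field names, and the sample invoice scenario below are illustrative assumptions, not a real product's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    suggestion: str                    # what the model proposes
    rationale: str                     # context shown alongside the output
    status: str = "pending"            # pending -> accepted | modified | rejected
    final_value: Optional[str] = None  # set only by a human action

    def accept(self) -> None:
        self.status, self.final_value = "accepted", self.suggestion

    def modify(self, new_value: str) -> None:
        self.status, self.final_value = "modified", new_value

    def reject(self) -> None:
        self.status, self.final_value = "rejected", None

# Example: the model proposes, the human adjusts before anything happens.
rec = Recommendation(
    suggestion="Flag invoice #1042 for review",
    rationale="Amount is 3x this vendor's historical average",
)
rec.modify("Flag invoice #1042 and notify the vendor")
```

Because `final_value` is only ever set by an explicit human action, nothing downstream can act on a recommendation the user never reviewed, and the recorded statuses double as a trail of which model outputs people overrode.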
The hype around AI helps sell products, but founders who limit how much they overstate what their AI can do avoid irresponsible consequences and, in practice, sell more effectively. Choice of language helps align expectations and build trust in a product: according to some founders, words like "assist" and "augment" were more likely to inspire adoption.