Rule #1: The AI must promise enough value so that someone develops it.
Developing an AI isn't cheap. Software of that sort takes time and expertise to set up, test, and iterate until the AI works as intended, and the process isn't necessarily a straight line. When an AI goes into a new area, development almost always means working with unknowns.
One could design an AI washing machine, but current washing machines work well enough, and the washing machine market is competitive enough, that such an innovation would likely yield little to no return.
Rule #2: The AI must provide more value than it loses.
We could, for instance, create an AI that assembles Legos for children. For those who love Legos, this would provide no value. However, I can see some entrepreneur using it to speed up assembly for his pre-assembled kit business. (It's a real thing.)
You can see from the example that one group would gain value from such an AI while another group would lose value from it.
The same is true of self-driving cars. Some people would gain, such as those who want to own their own taxi, especially if they aren't otherwise independent. Taxi companies would gain value by cutting payroll. However, car enthusiasts would lose value because they want the driving experience. People on low incomes would lose value because the cars would cost more to purchase and more to maintain.
Rule #3: Value must be verifiable.
It's not enough to claim value; value must be demonstrable. A claim that an AI manages money better, predicts weather better, or finds patterns better must be measurable, or you don't know whether it actually does anything better. Better may mean more accurate, or it may mean sifting through more data than a human can in a fraction of the time. Better is a metric defined by the customer.
Facebook has had AIs that failed to regulate news feeds. They failed this task because the AIs could analyze the news feeds but had no practical way of measuring the results. Especially where humans are concerned, analyzing what we want and giving us more of it can be too accurate a mirror on ourselves, or lead to provably false notions running amok. The problem here is measuring truth, which nobody has ever successfully accomplished.
Many AIs fail not because of the technology, but because the project doesn't have well-defined goals. "Do it better" is not a well-defined goal.
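To make the point concrete, here is a minimal sketch of turning "do it better" into a verifiable claim. Everything here is a hypothetical illustration: made-up temperature data, a naive baseline, and a candidate AI's forecasts compared on one agreed metric.

```python
def mean_absolute_error(predictions, actuals):
    """Average absolute gap between predicted and observed values."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

# Observed daily high temperatures (hypothetical data).
actuals = [21.0, 23.5, 22.0, 25.0, 24.5]

# Baseline: naively predict that each day repeats the previous one.
baseline = [20.0, 21.0, 23.5, 22.0, 25.0]

# Candidate AI forecasts (hypothetical output).
ai_forecast = [21.5, 23.0, 22.5, 24.0, 24.0]

baseline_error = mean_absolute_error(baseline, actuals)
ai_error = mean_absolute_error(ai_forecast, actuals)

# "Better" is now measurable: the AI's claim holds only if its
# error is lower than the baseline's on the agreed metric.
print(ai_error < baseline_error)
```

The specific metric matters less than the agreement: customer and developer commit to one measurable definition of "better" before the project starts, so the claim of value can be checked rather than asserted.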
Rule #4: There must be no cheaper or more effective alternative.
Just because an AI is possible doesn't mean there isn't a cheaper or better alternative. Humans are clever beasts, and while moving the goalposts is bad form in a logical debate, doing exactly that can be extremely lucrative if you're the one who moves them.
Galaxy Zoo was famous for having no budget, but when its team asked the public to help identify galaxies, volunteers contributed so many hours of work that the huge task was finished in two weeks at a fraction of the computational cost.