But can you keep telling it that Miracle Whip is NOT mayonnaise until it has learned the truth?

These AI models are… complicated, and I have limited understanding. To my knowledge they need to be trained, similarly to a dog: with positive and negative reinforcement plus a whole lot of information. You don't really "code" them the way you would a normal computer program, which makes it very difficult to change things. If it was accidentally told that Miracle Whip is mayonnaise instead of a mayonnaise substitute, there's no easy way to fix it. There's no line of code you can edit, similar to how you can't go in and change what a dog thinks is good behavior by directly altering its brain.
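Roughly, the difference looks something like this. This is just a toy sketch, not how any real model actually works, and all the names and numbers here are made up for illustration:

```python
# Hedged toy sketch of "coded" knowledge vs "learned" knowledge.
import random

# --- A coded program: the fact is data you can edit in one line. ---
facts = {"miracle whip": "mayonnaise"}            # the original mistake
facts["miracle whip"] = "mayonnaise substitute"   # the easy fix

# --- A trained model: the fact is implicit in numeric weights. ---
weights = [random.uniform(-1, 1) for _ in range(8)]  # stand-in for billions of parameters

def score(features):
    # The model's "belief" is a weighted sum; no single weight stores the fact.
    return sum(w * x for w, x in zip(weights, features))

def train_step(features, target, lr=0.1):
    # Positive/negative reinforcement: nudge every weight toward the right answer.
    error = target - score(features)
    for i, x in enumerate(features):
        weights[i] += lr * error * x

# You can't just edit one entry to fix the Miracle Whip answer; you have to keep
# showing it corrected examples until all the weights slowly drift to a new behavior.
miracle_whip_features = [1, 0, 1, 0, 1, 1, 0, 0]  # made-up input encoding
for _ in range(100):
    train_step(miracle_whip_features, target=-1)  # -1 meaning "not mayonnaise"
```

In the first case you fix the mistake by changing one line. In the second, the only handle you have is more training, which is why "just telling it the truth over and over" really is closer to how these things get corrected.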