TudyBOT Feedback & Discussion Thread

These AI models are… complicated and I have limited understanding. To my knowledge they need to be trained similarly to a dog, with positive and negative reinforcement plus a whole lot of information. You don't really "code" them as you would a normal computer, which makes it very difficult to change things. If it was told by accident that Miracle Whip was mayonnaise instead of a mayonnaise substitute, then there's no easy way to fix it. There's no code you can edit, just as you can't go in and change what a dog thinks is good behavior by altering its brain.
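To illustrate the point above, here's a deliberately oversimplified toy sketch (nothing like how real chatbots actually work inside, and every name in it is made up): the "knowledge" lives in numeric weights adjusted by reward, not in editable if/else rules, so a wrong fact can only be unlearned by more training in the other direction.

```python
# Toy reward-based learner: belief strength is a number, not a rule you can edit.
weights = {"miracle_whip_is_mayo": 0.0}

def update(belief, reward, lr=0.5):
    # Positive reward strengthens the belief; negative reward weakens it.
    weights[belief] += lr * reward

# Accidentally reinforce the wrong fact a few times...
for _ in range(3):
    update("miracle_whip_is_mayo", +1)

print(weights["miracle_whip_is_mayo"])  # 1.5 -> the model now "believes" it

# There is no single line to delete; the only fix is retraining
# with negative reinforcement until the belief fades.
for _ in range(6):
    update("miracle_whip_is_mayo", -1)

print(weights["miracle_whip_is_mayo"])  # -1.5 -> belief unlearned
```

The takeaway: fixing a mistrained fact means many corrective examples, not a one-line edit.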
 
> These AI models are… complicated and I have limited understanding. To my knowledge they need to be trained similarly to a dog. […] There's no easy way to fix it.
Would be nice if I could go in and change my chicken's brain to think that I am the ruler of her, and the whole coop, and that they have to let me pick them up.
Yeah, the AI has to be trained. Google Bard said Miracle Whip was mayo as well, so...
 
Sorry, I haven't read the whole thread because it's already so long, but I would say that she would be a fantastic addition and should stay.

I would add that making her a PFM-only feature may be the best idea; forcing her into one "corner" will create one complete mess of a forum.

She isn't needed for "emergency threads" as some have mentioned, since her advice is usually to go to the vet anyway, and I don't think her knowledge here is anywhere close to our members' first-hand experiences.
 
> These AI models are… complicated and I have limited understanding. To my knowledge they need to be trained similarly to a dog. […] There's no easy way to fix it.
So it would be like ten strangers trying to teach a dog not to jump up on people with voice attention alone - no leash corrections, no knee to the chest, no stepping on the back toes, no turning their backs on her, or anything like that. And no history of anything except voice correction.

That might work when one stranger is asking her to jump on them.

Not so much when 90 strangers are asking her to jump up on them and are overtly happy when she does.

Will she learn to predict which people want to be jumped on? AIs already are learning to personalize their responses.

That might not matter when they are responding to people who use one definition of mayonnaise vs another definition. It might matter a lot more when people ask whether a prefab coop is big enough for the number of chickens for which it is advertised.

Concept. Prefab coop capacity isn't a good example when only PFMs can ask her anything, because PFMs would know that.

What if we found out limiting treats to ten percent is detrimental vs allowing them free choice of many fresh foods?

Or it isn't but many regular posters thought it was for several years before realizing it wasn't?

Edit to add: concept again, I'm not saying anything about treats here.
 
