Notion evernote

8/12/2023 | 0 Comments

A lot of users were upset about a small number of actual humans having access to their data, but I'm not sure how widespread anti-AI feeling is going to be - where's the difference between your data being stored in digital form -vs- it being flagged, because of a particular pattern in that digital data, as misspelt or as needing to be moved to another notebook?

Do you really want your private note data to be integrated into the centralized bot's self-learning neural network?

My favourite science expert Sabine Hossenfelder had a think recently about whether AI chatbots actually understand what they're processing - despite the blurb, her conclusion seems to be that AI has a limited / partial understanding.

(My attitude to privacy is a little complicated - I started tech support back in the days when we could see users' accounts - passwords and all - in our admin screens, and one gentleman asked me to read a couple of his emails to him since his computer was broken. While that sounds a little wild, it wasn't totally insecure - one of my colleagues got fired on the spot for writing down frequent-caller user details so he could "save an interesting observation".)

AI is usually trained using a huge volume of data. It derives meta-information from the training data it is fed, and it does so in ways that can't be understood or reproduced. This means that technically identical AI hardware, fed with the same set of data, will build a different representation of that data. This is one of the big issues with AI (and explains why genius and idiocy reign side by side): the inability to QA the outcome. You can run it a million times, and you get a million different results. Often the "best" results (in terms of stability) are also the ones that carry the most baggage in the form of "prejudice" - AI bots have been proven over and over to be biased, worse than any living racist, homophobe or psychopath.

To train an AI to assist EN users with, say, filing, tagging, creating good note titles, writing better searches etc., one must feed the AI a huge volume of data - like the data of ALL users. This has two implications: the bad practice is probably by far overwhelming, and the good practices are tied to use cases that are implicit and not declared. And even in the best of all worlds, part of the data will burn itself into the neural network, starting a second life there. This is not in the sense that it is computer-readable like in a database, but it is integrated into the AI's brain, building the patterns stored there.

My conclusion: if EN would go down this trail, I would like to know exactly what they do, how they do it, and whether the implications were thoroughly analyzed. Or I'd like a little checkbox in account settings defining my account as a no-go zone for any AI bot.

With the release of GPT-4 I'm firmly in the torches-and-pitchforks group. We're talking superhacker smarts here, and I hope browser manufacturers are ready for it. Why in the world would you voluntarily connect your personal information (which includes anything you say or type into it) with a global entity that is massively faster and smarter than you and any interface device (other than another similar entity) that you might be using? The possibilities are genuinely frightening - especially when you consider that the said device could be given deliberately misleading input information, or just flat-out asked to influence opinions in one direction or another.

From this point on you can accept nothing as being factual. And what about those messages from family members that sound exactly like they should, or audio clips of celebrities endorsing views that you support? Any chance a virtual 'you' might generously give your savings to an (un)deserving cause? Check information from multiple sources and do things in person if you can - while it's not the Singularity we're expecting, I think our society just got disrupted big time.

But this can be seen from different views - the current hype around ChatGPT is ignoring some of them. One example is false answers, answers to which the AI stubbornly sticks even when shown proof that it is wrong. Another is the lack of links to sources, which makes it impossible to check validity. A third issue is the typical bias of AIs - a bias that is rooted in the training material the AI receives to build its network, so it can't be avoided easily.
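The claim above - that technically identical setups fed the same data end up with different internal representations - can be made concrete with a toy sketch. The following is a minimal, hypothetical illustration in plain Python (nothing to do with Evernote's actual systems): the same tiny perceptron is trained twice on the same four data points, and the only difference between the two runs is the random seed used for the initial weights. Both runs learn the task perfectly, yet their learned parameters differ.

```python
# Illustrative sketch only: why two "identical" trainings of the same model
# on the same data can end up with different internals. The sole difference
# between the two runs is the randomly seeded weight initialization.
import random

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # learn OR

def train(seed, epochs=200, lr=0.1):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(2)]  # random initial weights
    b = rng.uniform(-1, 1)                      # random initial bias
    for _ in range(epochs):
        for (x1, x2), y in DATA:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                      # classic perceptron update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

w_a, b_a = train(seed=1)
w_b, b_b = train(seed=2)

# Both models classify the training data perfectly...
for (x1, x2), y in DATA:
    assert (1 if w_a[0] * x1 + w_a[1] * x2 + b_a > 0 else 0) == y
    assert (1 if w_b[0] * x1 + w_b[1] * x2 + b_b > 0 else 0) == y

# ...yet the learned parameters (the internal "representation") differ.
print("run A:", w_a, b_a)
print("run B:", w_b, b_b)
assert (w_a, b_a) != (w_b, b_b)
```

With two weights and four data points the difference is merely cosmetic; in a network with billions of parameters the same effect means two equally-performing models can encode the data in thoroughly different, uninspectable ways - which is the "inability to QA the outcome" complained about above.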