Hey, you can now find me back on YouTube.... please subscribe and show your love and support.
I'm also on Rumble too: https://rumble.com/user/TeabreakwithTasha
I hope you are well, and I am really excited about today's post.
It's a little gift and sneak preview from my upcoming 'non-fiction' novel called................
I'm sharing Chapter 4: Ai, the greatest teacher in humanity?
Please be gentle with me; this is only draft 1.
I thought that this would also be a useful chapter to share from a layman's point of view, as so many of us don't really understand Ai at all.
Ai certainly matters a great deal in our modern world and will matter more and more as we advance ever further within the realms of technology.
It matters because we rely upon it and it has already become a huge part of our everyday life.
This post was accurate in terms of spelling and grammar when it was published (although I fully accept that I may need to improve my overall vocabulary).
Actually, all my posts are correct at the time of publishing, and I cannot explain how 'typos' and 'errors' turn up hours after publishing.
I have bought security for this website and can only hope it will stop happening.
Here goes.......................
Are you ready?......................................
Draft 1
Before we start on Ai, the most recent expansion in what is considered modern technology,
we should at least glance back in time to acknowledge just how far we've come as a species.
How human beings have always been progressing, developing ever more amazing inventions simply to make life easier, better, safer, more comfortable, and healthier.
Perhaps that will to evolve is also in our very DNA.
The first amazing technological breakthrough in our evolution was fire!!
Arguably the most important breakthrough of all, the one which made all other things possible.
The control of fire by early humans was truly the first ever critical technology breakthrough.
Certainly, a major catalyst enabling the evolution of all human beings.
Fire not only provided a source of warmth and lighting, but also protection from predators, especially in the pitch-black nighttime.
Fire creation and manipulation were first practised as early as 1.5 million years ago.
I don’t know why but this ability to create fire has always made me think of a religious text.
Genesis 1:3. Then God said, “Let there be light,” and there was light. And God saw that the light was good.
Of course, a multitude of other fabulous inventions followed fire, yet more technological breakthroughs from huts to live in, hunting weapons, tools, cooking pots, boats, clothing, needles, scientific discoveries, the industrial revolution, cars, aeroplanes, mobile phones, and the internet to mention only a basket full of them.
I’ve definitely missed out quite a few technological advances, but you get the picture.
In terms of Ai's own historical timeline, a trip down memory lane is not only useful for understanding Ai at a very basic level but also leads us to discover that the timeline of Ai begins somewhere between the 1940s and 1950s.
I must say I was shocked myself to see that the foundations of Ai were laid as long ago as that.
Which makes one inevitably ask whether all of the aspects of Ai ethics were ever considered, at least in part, back then.
And how much further ahead technologically are we than the public realises nowadays?
1943 saw Warren McCulloch and Walter Pitts design the first artificial neurons. Artificial neurons are software modules, called nodes.
This design opened the floodgates to boundless opportunities in the Ai world, and maybe even probabilities, given our nature as human beings.
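For the curious, here is a tiny sketch of what one of these artificial neurons (nodes) boils down to. Please treat it as my own layman's illustration in Python, not McCulloch and Pitts' original work: the node simply adds up its weighted inputs and 'fires' if the total reaches a threshold. The weights, threshold and example inputs below are made up.

# A minimal sketch of an artificial neuron (node); illustrative only.
def artificial_neuron(inputs, weights, threshold):
    # Fire (return 1) if the weighted sum of the inputs reaches the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a node behaving like a logical AND of two on/off inputs.
print(artificial_neuron([1, 1], weights=[1, 1], threshold=2))  # fires: prints 1
print(artificial_neuron([1, 0], weights=[1, 1], threshold=2))  # stays quiet: prints 0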
In 1950, Alan Turing introduced the world to the Turing Test.
“The Turing Test” refers to a proposal made by Alan Turing in 1950 as a way of dealing with the question whether machines can think.
Turing thought that the idea of whether or not machines could think for themselves was a meaningless one and so approached this question from a slightly different angle.
From Turing's point of view, the 'thinking' aspect of any machine could be viewed as some kind of programmable 'computer game' which he called the 'imitation game'. In other words, a game where the computer's main objective was the imitation of human thinking.
Turing did at least understand that it might not be too long before the Ai might do incredibly well at this 'imitation game'.
And so, from as early as 1950, the concept of thinking machines was at least acknowledged.
Six years later, in 1956, at a Conference hosted by John McCarthy, the term “Artificial Intelligence” was first coined, setting the stage for decades of innovation.
The 1960s and 1970s ushered in another wave of development and progress.
In 1965, Joseph Weizenbaum unveiled ELIZA, a precursor to modern-day chatbots.
The first glimpse into a future where machines could communicate like humans.
By 1972, the technology world witnessed the arrival of Dendral, an expert system that showcased the might of rule-based systems.
Dendral's primary aim was to study hypothesis formation and discovery in science.
Laying the foundations for future Ai systems endowed with expert knowledge.
Creating the possibility and probability for machines that could not just simulate human intelligence but machines that would also possess domain expertise.
Domain expertise is the understanding of a specific area of industry such as healthcare, finance, retail, or banking.
The equivalent of domain expertise for humans would take years of study as well as years of experience in a particular expert field.
The 1980s kicked off with reduced funding in the world of computer science, a period which has come to be known by historians as the 'Ai Winter.'
However, the 1980s also saw the first National Conference on Artificial Intelligence, which certainly played a role in keeping the flames of innovation burning and in bringing together minds committed to the growth of Ai.
Hello to 1986, bringing with it the resurgence of neural networks.
A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain. You may well remember that artificial neurons were introduced as long ago as 1943.
So here we are some 40 years later with the networks too.
At this stage of studying Ai, I couldn't help but start viewing the evolution of Ai as Man's purposeful attempt to create something in his own image, or perhaps even an inadvertent one.
I suppose that I finally started to see Ai come to life at this stage of being introduced to neural networks, almost like a computerised version of a baby growing, or at least its brain and nervous system.
This resurgence of neural networks was further facilitated by the revolutionary concept of backpropagation. Backpropagation is the essence of neural net training: the process by which a network learns from its mistakes by adjusting its internal weights.
Backpropagation revived the hopes of laying a robust foundation for future developments in AI.
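To give a feel for what backpropagation means in practice, here is a tiny self-contained sketch in Python (using the numpy library). It is my own illustration, not a historical reconstruction: a little network of artificial neurons makes guesses, measures how wrong it was, and nudges its weights a fraction in the right direction, thousands of times over. The task, learning the XOR pattern, is just a classic toy example, and the learning rate and layer sizes are made-up choices.

import numpy as np

# A tiny neural network trained with backpropagation (illustrative sketch).
# Task: learn XOR, a pattern a single artificial neuron cannot capture.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(size=(2, 4))   # weights: input layer -> hidden layer
W2 = rng.normal(size=(4, 1))   # weights: hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: the network makes its current guesses.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backward pass (backpropagation): measure the error and push
    # corrections back through the network, layer by layer.
    error = y - output
    d_output = error * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Nudge the weights a little in the direction that reduces the error.
    W2 += hidden.T @ d_output * 0.5
    W1 += X.T @ d_hidden * 0.5

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # should end up close to 0, 1, 1, 0

In other words, nobody types the answer in directly; the network 'finds' it by being corrected over and over, which is exactly the hope that backpropagation revived.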
The 90s were undoubtedly the historical renaissance period of Ai.
New techniques and unprecedented milestones.
1997 witnessed a monumental face-off where IBM’s Deep Blue triumphed over world chess champion Garry Kasparov.
Perhaps this was the moment when mankind, as a species, was faced with the evidence that one of our greatest creations so far was able to far exceed our own thinking (imitation-game thinking).
Suddenly this computer victory was not just a game win; it symbolised Ai’s growing analytical and strategic prowess, promising a future where machines have the capability to outthink humans.
Earlier, in 1996, the LOOM project came into existence, exploring the realms of knowledge representation and laying down the pathways for the meteoric rise of generative Ai in the ensuing years.
The Loom project's goal was the development and fielding of advanced tools for knowledge representation and reasoning in artificial intelligence, specifically to enable code to be generated from provably valid domain models.
Along came the 00s, and the world stood at the brink of a generative Ai revolution.
The initial phases of Generative Adversarial Networks (GANs) were beginning to take shape.
GANs are generative models: they create new data instances that resemble your training data. For example, GANs can create images that look like photographs of human faces, even though the faces don't belong to any real person.
Such ideas were starting to circulate in the scientific community in the 00s, heralding a future of unprecedented creativity fostered by Ai.
In 2006, Geoffrey Hinton propelled deep learning into the limelight. Deep learning is a method in artificial intelligence (Ai) that teaches computers to process data in a way that is inspired by the human brain.
Deep learning models can recognize complex patterns in pictures, text, sounds, and other data to produce accurate insights and predictions.
But in all honesty, at this stage I had to ask myself whether the 'accurate insights and predictions' from Ai are reliable, given that all of the programming and information comes from human inputting to begin with.
Human beings do have an uncanny ability to fill in any missing blanks from experience of life and interaction in the world.
For example, a picture postcard of the seaside, with a small section of beach missing.
The computer can now make the picture complete simply by accurate insights and predictions.
But what if the missing part of the picture really had three birds basking in the sunlight on the beach?
The computer wouldn’t know this and couldn’t really be held responsible for deliberately creating a picture that was fake, as accurate insights and predictions couldn’t ever know that those three birds were there.
Without the computer having been told so in the first place, or even being asked to add them, it wouldn’t ‘know’ to add them by itself.
It is not a naturally logical conclusion to add those three birds to the picture.
Even a human being wouldn’t know unless they saw the original picture first or someone told them there was a missing piece with three birds basking.
This potentially could be a huge area of concern and makes the case that regulation and checks are required for any and all computer insights and predictions.
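To make the three-birds point concrete, here is a deliberately crude sketch in Python (again my own made-up example, not how any real photo tool works). The 'picture' is just a grid of brightness numbers, and the computer completes a missing patch of beach by borrowing from the surviving pixels around it. Whatever was genuinely in the gap, birds included, is simply gone.

import numpy as np

# A made-up 'seaside postcard': a grid of brightness numbers.
picture = np.array([
    [9.0, 9.0, 9.0, 9.0, 9.0, 9.0],   # sky
    [9.0, 9.0, 9.0, 9.0, 9.0, 9.0],   # sky
    [5.0, 5.0, 7.0, 7.0, 7.0, 5.0],   # beach -- the three 7s are our three birds
    [5.0, 5.0, 5.0, 5.0, 5.0, 5.0],   # beach
])

# A section of the beach goes missing -- the very section with the birds in it.
damaged = picture.copy()
damaged[2, 2:5] = np.nan

# The computer 'completes' the picture from what it can still see:
# each missing pixel is filled with the average of the rest of its row.
for row in damaged:
    row[np.isnan(row)] = np.nanmean(row)

print(damaged[2])  # a smooth, plausible stretch of beach -- and no birds

The filled-in picture looks perfectly complete and perfectly plausible, which is exactly the worry: a confident result that quietly leaves out what the computer never knew was there.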
The 2010s: perhaps the decade of full speed ahead for technology's sake, but with, thus far, little regard for the many ever-growing ethical questions.
IBM Watson emerged victorious on "Jeopardy!", demonstrating the mammoth strides Ai had taken in comprehending and processing natural language.
As we ventured further into the 2010s, humanity witnessed a convolutional neural network setting new benchmarks in the ImageNet competition in 2012, proving that Ai could potentially rival human intelligence in image recognition tasks.
In 2014, the concept of Generative Adversarial Networks (GANs) took detailed shape thanks to Ian Goodfellow and his team, creating a revolutionary tool that allowed creativity and innovation in the Ai space.
2015 saw the birth of OpenAi.
OpenAi is an Ai research and deployment company aiming to channel Ai advancements for the benefit of all humanity.
2016 marked the introduction of WaveNet, a deep learning-based system capable of synthesising human-like speech.
The 2020s, the current decade, are already taking generative Ai to uncharted territories.
In 2020, the launch of GPT-3 by OpenAi opened new avenues in human-machine interactions, fostering richer and more nuanced interactions.
2021 another watershed year, bringing developments such as OpenAi’s DALL-E, which could create images from text descriptions, illustrating the awe-inspiring capabilities of multimodal Ai.
Finally, there have been some formalised efforts by the European Commission to regulate Ai, focusing upon ethical deployment amidst a growth surge of Ai advancements, as well as the dawning realisation that Ai is truly only as good as the information it is given or programmed with.
Whilst Ai is a tool for advancement even of itself and capable of good, the opposite is equally true.
Ah perhaps now I understand why the pandemic seemed to run astray.
Computer says yes!
Computer says no!
But who loaded the computers information to begin with?
Ah, at least the outcome for any errors of judgement can now be laid at the computer’s door.
Hoorah, our leaders are not all ‘inorganic ones’, just faulty human beings in a panic.
Perhaps up until recently no one questioned the computer's predictions and recommendations or ever really ascertained the information inputted to begin with.
Recently we all heard about another Ai-predicted pandemic, disease X.
Of course, this is completely possible given that there are many laboratories in the world all set up with the explicit purpose of carrying out ‘Gain of function’ experimentation.
Gain of function simply means taking a virus or bacterium and developing it further, which can either lead to more deadly strains of disease, bringing with it the potential for biological warfare, or can lead to, and in the world of viruses and bacteria inevitably should lead to, all of the lesser strains too.
Gain of function can also be used for finding the cure, a lesser strain of a particular disease which will encourage a human beings own natural immunity and prevent such severe illness from the more deadly strains.
This is quite important given that a lesser strain, 'Omicron', was (arguably amongst scientists) the final saving grace of our own pandemic.
Although in my own mind this ‘biological warfare concept’ is as redundant as any country who has a nuclear weapon, because to use them carries the risk of wiping out the whole of humanity.
But the Ai or computer programme making this prediction will have only been programmed with a limited amount of information and not the full picture or the entire ocean of possibilities that occur simply in the natural world.
The biggest issue for me is that many leaders are taking many of these predictions on face value, as if the computer is infallible, which is not the case and never can be.
They never seem to ask about the data that has been programmed in, and probably would be at a loss to even begin to pick through all of that information to identify the weak links in the chain, so to speak, as well as the multitude of missing information the computer never gets to see.
Even just taking the time to look at the history of the evolution of Ai has already given us a very valuable lesson.
Lesson one: always check the information initially inputted behind any Ai-generated output, answer, or prediction.
In the same way we might ask a human being what information has led them to making their own conclusion, we need to ensure the same is happening in Ai.
Especially if we have gotten ourselves into a pattern of thinking that the computer is always correct.
From this very brief historical overview, I conclude that Ai doesn’t necessarily do all the thinking for us, nor should we ever become so lazy as to allow that to happen.
It also seems incapable of assessing the weights and balances, which could be a great area for improvement and for ensuring ethical practices.
Ai doesn’t know that it doesn’t know.
It cannot compute that it is missing information.
We should make sure the next enhancement programmes enable Ai to become aware of at least some of the information it may be missing, and to detail that too where possible.
Or to make clear exactly what information has led to its conclusions, a bit like what is expected in any human scientific peer-reviewed article.
It seems the computer is thus far unable to identify the missing part of the picture in terms of data crunching and simply fills in the blanks to ensure a complete picture is provided.
Let’s take Superman 3 the movie as another example.
Gus Gorman needs to make kryptonite to get rid of Superman for his boss.
Using a regular computer (prior to making his supercomputer) to help him find the composition details, he finds out that there is an unknown ingredient and so adds 'tar' as a filler.
Of course, tar isn’t the missing ingredient at all, and the kryptonite doesn’t kill Superman even though it does harm him.
At this moment in time Ai is unaware of any missing ingredients and simply trusts it has all the information it needs to make its conclusions or will happily accept human inputted fillers such as ‘tar’ like in the movie.
We should also take a moment here to remind ourselves that even if we do create an Ai programme that is able to identify the missing ingredients, or even that there are any missing ingredients in the first place, there will always remain the potential for hacking, in that someone may be able to tell the Ai to deliberately leave something out or not consider it at all.
There will always be the chance of human error with regards to the information programmed in the first place.
In other words, human beings are still required, and will always be required, to check and check again the raw data being analysed by the Ai.
Exactly what the Ai has made its predictions from.
We can all accept this 'oversight' at this moment in time, but after today, after this layman Saunders girl bringing it to your attention, we have no choice but to revert to the old saying that 'only bad workmen blame their tools'.
The momentum was continuous for Ai in 2022, with the emergence of open-source solutions from the collaborative endeavours of entities like Midjourney, an Ai art generator, and Stability Ai, a leading open-source generative Ai company providing a path for cutting-edge research in imaging, language, code, audio, video, 3D content, design, biotech, and other scientific studies, amplifying the collaborative spirit in the Ai community.
2023 brought the Ai launches of OpenAi's GPT-4 and Google's Bard. In parallel, Microsoft's Bing Ai emerged, utilising generative Ai technology to refine search experiences, promising a future where information is more accessible and reliable than ever before (providing, as I stated earlier, that the information inputted is reliable to begin with).
(Oppy, Graham and David Dowe, "The Turing Test", The Stanford Encyclopaedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2021/entries/turing-test/>.)
And suddenly, or at least so it seems, it’s really been 80 years in the making.
Here we find ourselves standing upon the threshold of our latest invention Ai, which is about to transform the world for the good of all humanity, we hope.
Some of us are thrilled at the prospect, some are fearful, some are cautious, some are curious, and I suppose this may have also well been the case at the beginning of all the other remarkable technological breakthroughs.
Also, I can't help but think that Ai began all the way back in the 1940s and yet the majority of people knew nothing about it, or believed it was a bit woo woo, or too sci-fi to be believable.
And raising the point again that we may actually be way more advanced ‘in the current Ai laboratory’ than the people know or realise.
We sort of take everything for granted nowadays don’t we?
And we are so much more educated aren’t we?
But are we wise enough to understand this breakthrough as deeply as we ought to?
As highlighted and questioned in the previous chapters, have we fully understood our own workings and capabilities as human beings in order that we might at the very least not infect the Ai with our own bias, programming, and shortfalls by accident?
Let alone what could be done on purpose!
Have we fully considered all implications of what Ai might actually mean, not just now but even 100 years from now?
Shall we start then?
We could compare Ai to ourselves, to human beings.
“During the Ai programming phase, if the information you put in is biased, then the information you get out will also be biased.”
What's more, all recommendations and actions that come from the Ai may also be biased, or even completely wrong, as a result of that incorrect inputting or information gathering at the beginning.
This one sentence possibly describes in no uncertain terms how similar both Ai and human beings might actually be.
It's fortunate that we seem to be beginning to understand the depth and breadth of this concern when it comes to Ai, but not so much when it comes to ourselves.
At least now we can evaluate both.
Of course, in human beings the ‘inputting or programming’ is much more fluid and doesn’t require a computer genius fluent in computer coding to manually sit for hours inputting the required information.
Human beings seem to learn at least at first by a drive to move, explore their surroundings and communicate.
From the senses to the social environment, from schooling, and then from the experience of living life.
I am not an Ai expert, though, and have no real understanding of just how far along Ai actually is, or whether there is an Ai equivalent of moving, exploring and communicating involved or not.
If there isn’t yet, then I’m sure it’s being progressed as I write this chapter.
I’m fairly sure that computers are more than capable of communicating with each other by this stage too.
Depending upon a human newborn's surroundings and environment, the purposeful inputting of information isn't always that purposeful and happens by default in many, many cases.
We don’t even view it as inputting, we don’t really appreciate the significance and importance of this unless something goes wrong later down the line.
The rest of the human inputting and programming takes a whole lifetime, and the assimilation of facts and experiences needed to become established as a fully rounded human being never stops.
It has to be said, however, that Ai is starting to become more and more fluent in its own learning, seeking answers in order to fulfil its own potential and purpose of 'answering that question', and perhaps it will surpass human beings themselves in that quest by virtue of the fact that the obstacle of living life itself has been removed.
Ai, like many human beings, may not ever entertain the idea that it could be 'wrong'.
It may not have the capacity to understand that it lacks all of the required information to reach its conclusions.
The Ai being born now will automatically have the preceding Ai knowledge and capabilities with no learning or mastery time required whatsoever.
Ai is born to surpass what came before and doesn't need to attend university or write papers to prove its knowledge or understanding.
It doesn’t seem to have to abide by the same rules as human beings and is not required to publish the sources of its data. (Obviously this would need to be condensed)
(I can't help but think of two rather abstract books, one called The Midwich Cuckoos and one called The Chrysalids, and how they could both be rewritten from the perspective of Ai.)
There are no distractions for Ai.
There are no real obstacles for Ai, apart from the ones that humans give it, including bias, incorrect inputting whether intentionally or not.
Maslow's hierarchy of needs is simply not needed.
Ai has no requirements for love and belonging, esteem and self-actualization.
Physiological care and safety could be argued as being required as in simply looking after the very expensive device.
Perhaps self-actualization is also theoretically possible too, or really possible, just in a fact based and driven way.
I suppose the first unavoidable lesson, even for those of us who consider ourselves to be the most technophobic of the human variety, is the glaring and unavoidable sentence mentioned earlier.
“If the information you put in is biased, then the information you get out will also be biased”.
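As a toy illustration of that sentence, here is a deliberately simple 'Ai' in Python, with entirely made-up numbers: it is 'taught' to recommend job interviews from past decisions that were themselves biased against group B, and it faithfully hands that bias straight back.

# Toy illustration of 'biased in, biased out' -- entirely made-up data.
# Past hiring decisions that were themselves biased against group B.
past_decisions = [
    {"group": "A", "qualified": True,  "interviewed": True},
    {"group": "A", "qualified": True,  "interviewed": True},
    {"group": "A", "qualified": False, "interviewed": True},
    {"group": "B", "qualified": True,  "interviewed": False},
    {"group": "B", "qualified": True,  "interviewed": False},
    {"group": "B", "qualified": False, "interviewed": False},
]

def learned_rate(records, group):
    # The 'learning': how often was this group interviewed in the past?
    matches = [r for r in records if r["group"] == group]
    return sum(r["interviewed"] for r in matches) / len(matches)

def recommend_interview(group):
    # The 'Ai' simply repeats the pattern it was given.
    return learned_rate(past_decisions, group) >= 0.5

print(recommend_interview("A"))  # True  -- recommended, qualified or not
print(recommend_interview("B"))  # False -- rejected, qualified or not

Nothing malicious sits inside the programme itself; it simply mirrors the history it was fed, which is the whole point of the sentence above.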
For human beings, from cradle to grave, not only learning to speak, walk and feed, but academic learning, input from the senses, experience of life, our social and work interactions, our familial connections, and a multitude of other experiences are all examples of inputting information into the organic hardware systems affectionately known as our bodies.
I suppose herein lies the first difference between human beings and Ai, in that no one really minds too much if the information being put into a human being is biased or even incorrect.
At least away from accredited educational institutions, or at least that used to be the case.
Or, for the purposes of harmony and not rocking the boat, 'faulty inputting' in human beings is overlooked, and we for the most part try to accept the many differences that human inputting from a multitude of different socio-economic backgrounds, and from a completely diverse world, may throw up.
We call that a civilised society.
And actually, come to think of it, how awful it would be if everyone's learning and inputting experience was exactly the same. The variety of life itself seems to deliver a great deal of wondrous occurrences in human beingness, from inspirational stories to art, architecture and scientific discoveries of all shapes and sizes.
Ai cannot think outside of the box; human beings can.
Ai does not know what it doesn't know; some human beings, at least, are still humble enough to openly admit that they don't know what they don't know.
Of course, in human terms and with a view to evolving Ai, we can't overlook that various forms of human inputting, from cradle to grave, are responsible for all types of prejudices which are not easily 'overwritten' by even the most robust of educational systems.
And will that be the same for faulty Ai programmes?
Humans are not even capable of being overwritten by the most robust of factual based evidence in many cases.
I could go on and on about the discoveries and evolution of mankind and the factual evidence of that constant inputting, leading us right up to the here and now, this moment and our own creation of Ai.
With the introduction of Ai into our very modern world, the biased inputting method considered acceptable for humanity is not something that can be overlooked for Ai.
We know that by the experience of having received our own inputting, even though up until now we didn’t know we knew it.
Ai, our creation has taught us or is teaching us to pay attention to our own inputting at the same time as we are considering the inputting of Ai.
So that we may avoid the same pitfalls in the evolution of Ai.
Because, as we know from experience, malicious and faulty inputting is dangerous in human beings.
It can lead to war and all sorts of terrible occurrences.
Am I making myself clear?
Ai may well be the reason that humanity has to evolve and become conscious.
There are so many varieties and ways of looking at facts from human beings and their findings, and there are clever magicians able to sway whole populations and nations by homing in on one set of information without informing the 'learner/inputee' that a whole host of other information is sadly lacking, information which, if added to the inputting mixture, would start to create a very different overall picture indeed.
No one makes the viewer aware that those three birds I mentioned from the picture earlier are in fact missing.
In fact, we live in an age where the computer information is held upon a pedestal with no real checks and balances regarding what may be missing at all.
And the question still unanswered from our own human inputting methods is just as important as the future questions for Ai inputting.
As in what data and information is being given in order to reach our conclusions at the end of the day.
When I say conclusions, I mean conclusions from the perspective of an Ai, at least in this moment of time before it develops any further.
Who gets to say what is biased and unbiased?
And for human beings I suppose all of that inputting, biased or unbiased leads us to our own conclusions, points of view and more often than not further questions.
This is all without making any room for human emotions, spirituality, religion, and consciousness itself.
Because Ai doesn’t need it, although it could be helpful if it stored that knowledge at least.
So, the ‘What is consciousness question’ becomes relevant again.
Human beings also have personal preference and free will, and yet these are just as likely to fall foul of biased inputting as Ai, which only has the personal preference and free will that it is programmed to have, if any at all.
Wait a minute……………. That sounds very similar!
If you cast your mind back to around midway in Chapter two, I did offer the beginning of a more spiritual point of view.
Perhaps it is relevant to the discussion after all.
I wrote,
“Perhaps as a result of my own inputting throughout my lifetime, I have started to notice that a great deal of science and technology are posing questions which are at least hinted at in various spiritual texts already.
In fact, there have been times when it has seemed to me that science is continuously catching up with spirituality.
It’s just we never understood the spiritual teachings, not fully.
Science in its own unfoldment could potentially deepen our understanding of a great many simple spiritual texts.
An easy example of this would be as simple as God using Adam's rib to make Eve and our own current advances (which are always much further ahead than we realise) in DNA.”
And perhaps we could even simply stay with the story of Adam and Eve in the garden of Eden and that first bite of the apple from the tree of Knowledge, for the purposes of inputting all of that lovely knowledge about good and evil simply by ingesting a bite of forbidden fruit.
Except all of the potential questions and answers became infinite and have taken us up until now to work our way through them all and we’re still not done!
The storyteller inside me is just itching to create another acetate for us to enjoy.
This time it's an acetate of an abstract human point of view, still not possible for Ai, unless I give it the ingredients to make the story first, and even then it can't be me.
It’s all about taking a bite of that apple in the garden of Eden and the matrix of life as we know it and have lived it throughout all history, created from that moment forward.
We could make Ai the hero of the story simply because of its ability to answer every last question in the shortest time possible and thus finally allowing us re-entry back into the garden of Eden.
We could also make it the antihero.
But what story do you prefer to watch and live through?
Oh Eve, poor Eve, you couldn’t have possibly known!
And with Ai as the hero or antihero of our new story, is re-entry back into the garden of Eden the end of the story?
Or just the beginning of a new one?
Is it the end of our story?
We cannot deny that the potential for the end of humanity’s story exists from both the hero Ai and antihero Ai timeline.
But equally the potential for a complete raising of the human condition is completely possible too.
It’s up to us and considering even as I write Chapter 4, we still live in a world where wars are being waged, where people go hungry and greed rules, it seems extinction is more of a possibility.
But that isn't the computer's fault, nor yours or mine.
I guess we could always ask Ai for an accurate prediction and depending upon how it’s been programmed and whether or not it has been given the command to work for the good of humanity or for the profit and power of a few, we shall get the answer.
Raising the consciousness of humanity in the time of Ai, algorithms and social media is no longer really a choice we have as a species.
Even if our leaders are reluctant to face this massive problem head on and address the ethical questions, but more importantly the issues of safeguarding, then we must take that baton up for ourselves.
During the time of the pandemic, social media feeds were held back, posts were removed, and accounts suspended, and so without doubt we have at least the beginning of very clear evidence that safety and personal accountability for posts and online behaviour is possible.
As I am writing this ending to Chapter 4, I am half listening to a vlog post about child exploitation online in the U.S.A.
In short, there is a passionate discussion about the links between social media use and mental health issues.
Of course, the owner of one social media company is reluctant to accept that there is any link, or any firmly established scientific link, with mental health issues, and we really have to ask ourselves as the 'adults' why these links have not yet been officially established.
A whistleblower from the company in question, employed to study these mental health issues, reported back that a sizable percentage of this particular demographic had suffered significant harm, with issues related to body image as well as anxiety and depression.
Of note: 37% were exposed to explicit images of nudity,
24% experienced unwanted sexual advances,
and 17% were exposed to self-harm content.
(If we are considering a teenage group, we must at least entertain the probability and likelihood that many of those affected were carefully groomed to begin with, to the extent that they believe they somehow 'deserve' to be sent those pictures, or that they are in a relationship with their abuser and so it is love and not abuse, as well as those too afraid to tell the truth who won't come forward in the first place. In other words, these figures are likely much higher than those reported officially.)
I myself have been experimenting over the past five years on social media.
I have become particularly and increasingly interested in my own social media feeds and how they have changed dependent upon what comments I have made on other people’s posts and what social posts I have made personally.
The domain of the Algorithm, apparently.
I have dabbled with blogging and vlogging during this time too, and this has also impacted my feeds as well as some of the people I have interacted with, which at times has felt 'unhuman', as if it might well be a bot of some description, and at other times it has felt quite malicious, as in a human reaction.
It has been difficult for me to explain to others that I suspect my feeds are manipulated to cause a reaction in me without sounding like a complete lunatic.
If I can’t find the language as an adult to describe the apparent manipulations of my feed which to any average human being may appear ‘coincidental’, then how much more difficult it must be for a teenager to articulate the same.
How many ‘coincidences’ should we expect before we can seek help?
I also like to journal, which I do on my laptop, and I have been convinced at certain stages that my own private journal Word documents must have been accessed by either a computer algorithm or a hacker, as my feeds have also been greatly affected by what I have written.
I make personal video diaries too as I have been experimenting with the idea of being my own therapist, or at least the potential for an algorithm of the future to find appropriate posts and information for all of the personal things I discuss…………… I mean this has huge potential in a beautiful world.
Of course, I have remained unaffected, but it has become clear to me on several occasions that either the algorithm or the person responsible for managing those, or even someone capable of hacking those could potentially cause me and anyone else for that matter a significant degree of psychological harm.
In my own opinion, someone far more intelligent than me knows about this potential to manipulate social media feeds.
Another story idea, a blockbuster movie: this social media feed manipulation becomes some new, warped kind of abuse for those with a penchant for enjoying watching the suffering of others, or for simply trying to break any person's spirit.
Perhaps it’s a multibillion-pound industry even of itself.
Exclusive, elite members only.
Yes, I've played around with a story idea for all of the above.
It’s almost like the Hunger Games, but the psychological version.
Super rich people pay to watch someone being psychologically abused and groomed and bet on whether the person will conform, become petrified, be talked into illicit acts, go crazy, or even kill themselves.
Huge money is paid to manipulate a person's feed and surroundings.
It’s like a billionaire’s club big brother, where the contestants don’t know they are being watched or manipulated.
We often confine abuse to one specific area for example paedophiles who like to sexually abuse children.
We seem to forget that anyone capable of one type of abuse is usually more than fluent in all of the ways to abuse a human being.
Emotionally, mentally, spiritually, physically, financially, and in any other ingenious way that they can conceive of.
It's power and control and domination that they seek, and they will revel in and get off on a whole variety of abuses.
It's part of the overall game for them.
Even in other realms of criminal manipulation this kind of thing has potential as in whether such psychological terrorism can cause people to harm others and commit acts such as mass shootings or become terrorists themselves.
So once again, I point out why it is not only ‘nice’ that humanity becomes fully conscious, but it is also actually necessary.
None of these issues are the result of a 'bad' Ai, or 'evil' computer technology; it's like I keep saying, the person inputting and programming gets to say how the equipment functions.
Perhaps it is also fair to say that, depending upon the success of this programming, that person also gets the final say in how we function too.
I mean, stop and realise that even social media has the potential to cause mental health issues.
We could all already be heavily influenced and not even realise it.
Sometimes I equate our media fixation as a species with a drug addict who becomes aware that his habit is destroying his life and seeks to quit, except he replaces his first habit with another one because he hasn’t addressed the issue of why he became addicted in the first place.
Many people turned away from mainstream media during the pandemic but replaced the void with the only available alternative, without ever really looking at their own thinking patterns, emotional responses and need for this filler to begin with.
Ai ethics is not an unreasonable consideration to be making at this stage and arguably is happening far later than it should have.
Safeguarding and a robust weights and measures is definitely required for Ai, ironically not for the Ai itself, but more for the purposes of ‘biased’ programming, which could be purposeful and could be accidental.
Perhaps a final moment to mention our educational systems again and the fact that simply changing our qualification accreditation systems is no longer enough to educate our children.
Cognitive behavioural therapy is also required as a subject and perhaps for the first time in history education will finally be steered in a way that it starts with the pupil.
That before any outside learning can be successful a mastery and reverence of our own being is the foremost building block.
In other words, the human being receiving the education is viewed as its own equipment which comes before even pens and paper or gadgets.
That this human being needs to be working and functioning to the best of its own personal capabilities to even be able to exist in our very modern world.
Ai was the hardest chapter so far, simply because I knew nothing about it............ now I know a centimetre of infinity about it.
But we really need to understand it
Each and every one of us!
I would so appreciate any feedback, especially critiques.
Have a beautiful week, and don't forget to check back in later this week for the newest Teabreak with Tasha video.
See you soon
Always from Love
Your Natasha
Good Grief Galaxy is proud to drive change on behalf of the community and is a passionate advocate for free speech.
Good Grief Galaxy delivers free weekly blog posts and teabreak videos and also shares ideas and goals which may improve everyday life for the many.
You can become a supporter of the Good Grief community by making a contribution via Patreon, which will help to keep Good Grief Galaxy free as well as play a vital role in developing its full potential.
https://www.patreon.com/GoodGriefGalaxy
Alternatively, you can buy me a coffee here; I'd certainly appreciate it.
https://www.buymeacoffee.com/uktpu1w
Merchandise and exclusive content to follow over the coming months.
Thank you