Drug research, mRNA research acceleration, etc. are using specialized neural nets trained in highly controlled environments on highly controlled data.
All types of neural networks are pattern recognition. Medicine and biology are somewhat unique in being heavily pattern-dependent: finding new patterns in a known dataset is a large part of discovery. This contrasts with other scientific fields, which may draw inspiration from datasets, but where the invention itself is much more abstract. That makes neural nets perfect for assisting those types of research.
LLMs are trained on language pattern recognition. The simplistic view is: "given the past X words entered, Y has the highest probability of coming next." The model has no actual idea what it is talking about. That is why the first LLMs were so horrible and full of bullshit (though at least in the early days they were still allowed to say "I don't know"). Things have evolved rapidly; they are trained somewhat differently now and given MUCH more context around the sentences. But the core model of an LLM is the same, and there are many, many tests out there showing that the models can't actually reason and don't understand what they are saying outside of the very narrow context they are given. That is why they are more useful as simple transcription tools or as search (though they just lie if they can't find the answer anyway). It is also why they are very good at copying people's voice patterns in order to scam people.
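To make the "most probable next word" idea concrete, here is a toy sketch. This is a simple bigram counter over a made-up mini-corpus, not how a real LLM works (those use neural networks over subword tokens and far more context), but the core prediction step is the same: pick the continuation with the highest observed probability.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus for illustration only.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" twice, "mat" once, so "cat" wins.
print(predict_next("the"))  # -> cat
```

Notice the model has no concept of what a cat or a mat is; it only knows which strings tended to follow which other strings in its training data.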
They are also literally draining entire populations' water supplies just so people can draw a bad sketch of a car and tell ChatGPT "make this into a 3D drawing with a cat on top."
That is a different type of AI/machine learning.
They aren’t using Large Language Models for that.
I’ll say it again… the person I replied to was generalizing about machine learning, not specifically LLMs.