theprestige
Penultimate Amazing
Okay, so an LLM would have to be perfect and incapable of any error in order to make up for our ignorance of how they work? That's a pretty unreasonable standard, to say the least.
The standard proposed by Dr Martell for the DOD's use of LLMs and other "AIs" seems much more reasonable: for a given use case, define a measurable level of reliability, then measure whether the AI in question meets it. If it does, the DOD will consider using it for that particular use case. If not, it won't.
In fact, Dr Martell says we should be demanding this of AIs in general, and we should be repudiating AI providers that don't offer this degree of confidence.
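To make the idea concrete, here's a minimal sketch of that acceptance test in Python. Everything here is hypothetical illustration, not anything Dr Martell or the DOD actually published: the use case, the test cases, the toy "model", and the 0.9 threshold are all made up to show the shape of the procedure (define a measurable bar, measure against it, accept or reject).

```python
def meets_reliability_bar(model, test_cases, threshold):
    """Measure accuracy on a use-case-specific test set and
    compare it against the required reliability threshold."""
    correct = sum(1 for prompt, expected in test_cases
                  if model(prompt) == expected)
    accuracy = correct / len(test_cases)
    return accuracy >= threshold, accuracy

# Hypothetical use case: the "model" must echo its input in uppercase.
cases = [("alpha", "ALPHA"), ("bravo", "BRAVO"),
         ("charlie", "CHARLIE"), ("delta", "DELTA")]

# Toy stand-in model that fails 1 of the 4 cases.
flaky = lambda p: p.upper() if p != "delta" else "???"

ok, acc = meets_reliability_bar(flaky, cases, threshold=0.9)
print(ok, acc)  # 0.75 accuracy misses a 0.9 bar, so it's rejected for this use case
```

The point is that the decision is per use case: the same model could clear a lower bar for a low-stakes task while failing this one, which is exactly the kind of measurable, falsifiable claim the post argues AI providers should be held to.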