Checkmite
Skepticifimisticalationist
Earlier in the thread I posted about the idea of combining ChatGPT with WolframAlpha to make something capable of answering questions with mathematical components more accurately than just GPT alone.
Scott Aaronson recently published a paper in which he does just that. Here's his discussion from his blog:
https://scottaaronson.blog/?p=7460
Click through to see the example problems.
There's more discussion of takeaways at the link.
An LLM as a "base application" with add-on modules providing specific functionality sounds like the right way to go (rough sketch of what I mean at the end of this post).
But that doesn't seem to be what AI makers (and fans) "want"; OpenAI, for instance, really seems to want ChatGPT to eventually be all things to all people, so much so that they expect people will actually pay more for a pared-down, narrower-scope version of it, a la the GPT Store.
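To make the "base application plus add-on modules" idea concrete, here's a rough Python sketch of what I mean. Everything in it (ask_llm, wolfram_module, the crude routing check) is a made-up stand-in, not any actual OpenAI or Wolfram API; a real setup would send the question to a hosted model and dispatch math-looking sub-queries to something like Wolfram Alpha.

```python
# Sketch of the "base LLM + add-on modules" idea.
# All names here are hypothetical stand-ins, not real APIs; real code would
# call an actual LLM service and the Wolfram Alpha API instead of these stubs.

import re

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a base LLM (e.g. via a chat completions API)."""
    return f"[LLM answer to: {prompt}]"

def wolfram_module(expression: str) -> str:
    """Stand-in for an add-on math module (e.g. a Wolfram Alpha query)."""
    # Just evaluate trivial arithmetic here to keep the sketch self-contained.
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception:
        return "[math module could not evaluate]"

def answer(question: str) -> str:
    """Route the question: math-looking input goes to the add-on module,
    everything else goes to the base LLM."""
    if re.fullmatch(r"[\d\s\+\-\*/\(\)\.]+", question.strip()):
        return wolfram_module(question)
    return ask_llm(question)

if __name__ == "__main__":
    print(answer("12 * (7 + 5)"))          # handled by the math module
    print(answer("Why is the sky blue?"))  # handled by the base LLM
```

The point isn't the routing logic (which is deliberately dumb here); it's that the base model and the specialized modules stay separate pieces that can be swapped or upgraded independently.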