• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

ChatGPT

And...?

Are you suggesting that there is no analogue for computers which serves the same functions that hormones do in the human body?

My thought exactly. If an AI found a “hormone” function beneficial, it seems like an analog could be generated in code.

Or it may consider that, and “decide” that hormones simply gum things up for humans, and would stand in the way of “pure” intelligence.
 
Last edited:
My thought exactly. If an AI found a “hormone” function beneficial, it seems like an analog could be generated in code.

Or it may consider that, and “decide” that hormones simply gum things up for humans, and would stand in the way of “pure” intelligence.


Hormones do a lot more in the human body than just affect emotions. In many cases that's almost a side-effect.
 
It certainly needs to be planned for in advance, but no AI is required. Such shut-offs could easily be automated with technology we've had for decades. And they would be equally problematic

People need to quit blaming their tools.

There was a little ditty that popped up way back when word processing systems first hit corporate offices. It was called;


The Secretary's Lament.​

I really hate this damned machine.
I wish that they would sell it.
It never does just what I want,
but only what I tell it.​


Things really haven't changed all that much in the ensuing half century.
All true, and all basically down to the management of human beings over their machines. But I suspect that the smooth and autonomous operation of AI has a great potential for letting people become lazy about oversight, and while in one sense we can't blame AI for that, we can acknowledge the situation in which it does what it does. Just as we can't exactly blame self-driving cars for running over cyclists when their drivers fail to keep watch, but we can also remember that no self-driving car did anything when there were no self-driving cars.

(poetic thread drift....)

https://scienceblogs.com/worldsfair/2007/01/03/discovery-a-polish-poem-1
 
All true, and all basically down to the management of human beings over their machines. But I suspect that the smooth and autonomous operation of AI has a great potential for letting people become lazy about oversight, and while in one sense we can't blame AI for that, we can acknowledge the situation in which it does what it does. Just as we can't exactly blame self-driving cars for running over cyclists when their drivers fail to keep watch, but we can also remember that no self-driving car did anything when there were no self-driving cars.
[snip]


And remember, as well, that no human-operated cars did anything when there were no cars. (Aren't truisms fun? :p)

It doesn't hurt to keep in mind that the issue is not whether self-driving cars are dangerous, nor is it even whether human-driven cars are dangerous, but rather which are more dangerous, and will they remain so.

Autonomous cars aren't quite there yet, I'll agree with that. But considering the rate of improvement in the industry, I may even live long enough to see when they get there.
 
And remember, as well, that no human-operated cars did anything when there were no cars. (Aren't truisms fun? :p)

It doesn't hurt to keep in mind that the issue is not whether self-driving cars are dangerous, nor is it even whether human-driven cars are dangerous, but rather which are more dangerous, and will they remain so.

Autonomous cars aren't quite there yet, I'll agree with that. But considering the rate of improvement in the industry, I may even live long enough to see when they get there.
Indeed, and my point here is that if and when autonomous cars become more consistently reliable than human driven cars, it will likely be because the developers thought seriously about the possible pitfalls including novel dangers, and dealt with them proactively, and did not just think "we'll solve that problem later if it shows up."
 
Indeed, and my point here is that if and when autonomous cars become more consistently reliable than human driven cars, it will likely be because the developers thought seriously about the possible pitfalls including novel dangers, and dealt with them proactively, and did not just think "we'll solve that problem later if it shows up."


I'm not sure why you think they aren't already doing their best to do just that.
 
They may be, but I'm more speaking to some here on this forum (arthwollipot in particular) whose approach seems to be the familiar idea of net gain, that technology will fix the problems later when we figure out what they are.


I expect that "technology", or rather the developers of such will anticipate as many problems as they can and do the best they can to prepare for them with the tools available.

But, inevitably, there will unanticipated problems no matter how hard they try to avoid them, and those are the ones which will be addressed with later technology.

This is how the process works. This is how it has always worked.

Engineers know this. They anticipate everything they can, and know that it won't matter. Something else will come up.

Are you familiar with the collapse of the 1940 Tacoma Narrows Bridge? If not then watch this film clip. It's a classic. Only 2:29 minutes long, but it is engraved in the minds of every single engineering student since it happened. It has affected the designs of every suspension bridge built since then. Entire subdisciplines in engineering have developed as a result.

It is instructive of this very concept. Some of the most important lessons are learned when things break unexpectedly. Software design is no different.

Also, I don't understand what your beef with arth is, but I think you misjudge him.
 
Last edited:
Maybe I'm just feeling a little like an old curmudgeon today, but it seems that in some arguments of this sort people have introduced "what if" scenarios, possible ill consequences, and been dismissed as alarmists, their critics conflating the possible with the inevitable, caution with cancellation. Of course some things have happened, and would have happened anyway, and it's too late to stop what started too long ago, but here we are, not only graced with the luxury of living in yesterday's world of science fiction, but also faced with a multitude of consequences that have lives of their own, that we cannot unmake, only try to control. We can't uninvent the nuclear weapons that threaten us, the guns that shoot us, the cars that choke us, the climate change that boils us alive. There are, of course, numerous arguments to be made about how progress of one sort or another is worth the price. But right or wrong, they are rationalizations, because the time for actually evaluating it is long gone.

Here we are embarking on a new technology which is powerful, in some cases wonderful, certainly often entertaining, filled with promise, but also with some inherent potential to get out of hand, to change the way we do things, the way we think of things, who does what for a living, who gets rich and who gets poor, maybe who runs things, maybe who tells whom what to do, maybe how important decisions are made, diagnoses arrived at, utilitarian solutions implemented. It is doing new things in a new way, and even if the end result of considering it all is to say "lets go ahead," the time for imagining what could go wrong, or what could simply become different, and perhaps what could be done in case of unforeseen circumstances, is now.
 
Playing around with ChatGPT 4 (thanks to Bing) I've discovered it does pretty well on generalities but can really screw up on specifics. I asked it to write a systemd unit file for ssh-tarpit (a honeypot program designed to trap malicious hosts trying to login to a system using ssh.)

It first wrote a using for using a different product, endlesssh.

When I mentioned the error, it wrote an updated file using ssh-tarpit, but included a configuration parameter "-c /etc/ssh-tarpit/ssh-tarpit.conf".

I pointed out that ssh-tarpit doesn't use a .conf file. ChatGPT apologized, saying it actually uses a YAML file, and changed the configuration option to "-f /etc/ssh-tarpit/ssh-tarpit.yaml".

Except ssh-tarpit doesn't use a configuration file at all. The only way to configure it is by passing parameters on the command line:
Code:
ssh-tarpit --bind-address=192.168.1.42 --bind-port=2222 \
  --interval=10 --logfile=/var/log/ssh-tarpit.log
 
Playing around with ChatGPT 4 (thanks to Bing) I've discovered it does pretty well on generalities but can really screw up on specifics. I asked it to write a systemd unit file for ssh-tarpit (a honeypot program designed to trap malicious hosts trying to login to a system using ssh.)

It first wrote a using for using a different product, endlesssh.

When I mentioned the error, it wrote an updated file using ssh-tarpit, but included a configuration parameter "-c /etc/ssh-tarpit/ssh-tarpit.conf".

I pointed out that ssh-tarpit doesn't use a .conf file. ChatGPT apologized, saying it actually uses a YAML file, and changed the configuration option to "-f /etc/ssh-tarpit/ssh-tarpit.yaml".

Except ssh-tarpit doesn't use a configuration file at all. The only way to configure it is by passing parameters on the command line:
Code:
ssh-tarpit --bind-address=192.168.1.42 --bind-port=2222 \
  --interval=10 --logfile=/var/log/ssh-tarpit.log
You are quite right that ChatGPT 4 is better at general stuff than specifics. The version that you get with Bing is very good when it can find a text that it can quote from. Developing an entirely new program is bound to have errors, but it is impressive how can do it by extrapolating from other programs.

I tried something similar, but Bing told me that it doesn’t write code. I wonder how you did it.

ChatGPT 3 has no such limitations.
 
You are quite right that ChatGPT 4 is better at general stuff than specifics. The version that you get with Bing is very good when it can find a text that it can quote from. Developing an entirely new program is bound to have errors, but it is impressive how can do it by extrapolating from other programs.

I tried something similar, but Bing told me that it doesn’t write code. I wonder how you did it.

ChatGPT 3 has no such limitations.

Curious. Were you using the chatbot? In order to use the Chat part of Bing you need to be running the Edge browser; Microsoft has coded it so it works only with Edge. The default new tab page for Edge includes a "Search the Web" box, and left-clicking in it brings up a short list of options. The first one is "Hi, I'm Bing. Your AI-powered copilot for the web. [link]Get Started". Clicking anywhere on that test takes you to the chatbot.

The instruction I used was "Write a systemd unit file for ssh-tarpit." It also wrote some helpful PowerShell scripts for me.
 
Last edited:
Curious. Were you using the chatbot? In order to use the Chat part of Bing you need to be running the Edge browser; Microsoft has coded it so it works only with Edge. The default new tab page for Edge includes a "Search the Web" box, and left-clicking in it brings up a short list of options. The first one is "Hi, I'm Bing. Your AI-powered copilot for the web. [link]Get Started". Clicking anywhere on that test takes you to the chatbot.

The instruction I used was "Write a systemd unit file for ssh-tarpit." It also wrote some helpful PowerShell scripts for me.
I was using a specialised iOS app called “Bing” that is essentially an Edge browser. It works like the chat part of the Edge browser on my laptop.

Maybe I formulated the instruction in a wrong way. It is possible that I wrote something like “Can you write the code for a Python program that” etc. Anyway, the exact same instruction worked with ChatGPT 3.
 
Researchers show certain types of input can get several AIs to break the supposed restrictions they are supposed to obey. They claim there's no way to stop it.

https://www.wired.com/story/ai-adversarial-attacks/

That's pretty old. It comes from the fact large language models are not primarily designed to be chat bots. Main way to modify their behavior is adding things to the prompt. There are no levels of access, no user rights. Whatever the administrator can do, user can do (and undo) too.

Not sure it can be completely avoided, as inner working of AIs, and LLMs especially is impossible to understand. Language itself is not exact and full logical paradoxes. We can never be certain how model would react.
But I think it could be greatly improved, if the security was part of the problem specification from the start.
 

Back
Top Bottom