Dear Users... (A thread for Sysadmin, Technical Support, and Help Desk people)

Status
Not open for further replies.
I know. No problem with proper call logging. In fact, I insist on it. But when a caller says "I'm L2 support for this technology and have already been through these tests, etc. May I talk to an L2 tech, please?", it is a waste of time at both ends to go through ALL the "turn it off and on" steps again just to tick their boxes.

In many cases, I have been perfectly satisfied with an L1 response like: "Ah, I see you seem to understand the tech. This is probably due to ongoing issue XXX. We are working on that currently, with an estimate of YYY-time delay. If it doesn't clear after YYY-time, call back and we will take it from there. Otherwise, how can I help?"

I have to note that having been on both sides of support, this probably isn't advisable either. In theory it makes sense--if it's clear that the user is familiar with basic troubleshooting and says they've performed it, it would be more efficient to escalate right away.

On the other hand, I must admit that even being exhaustively familiar with basic troubleshooting, when I have a problem that seems to stump me, about three out of four times basic troubleshooting that I decided was irrelevant resolved the issue anyway. Someone less polite than me (or who did not have a spouse with more common sense than I have) could badger their way past basic troubleshooting with level one, and wind up wasting level 2 time, and their own. So I understand why lots of places insist on having the level 1 stuff performed with staff on the line before escalating.

That said, if there's a known issue, it is stupid for L1 to put a caller through troubleshooting and only resolve it as an outage through escalation. Known outages and how to screen for them should be critical information Level 1 has available from Level 2 and be part of the process.
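That screening step can be sketched very simply: before the scripted troubleshooting starts, L1 checks the caller's affected service against the current known-outage list. A minimal sketch (the service names, descriptions, and `KNOWN_OUTAGES` structure here are all invented for illustration):

```python
# Hypothetical sketch: screen a call against known outages before
# starting scripted L1 troubleshooting. All data here is invented.
from datetime import datetime, timedelta

KNOWN_OUTAGES = {
    # service -> (description, estimated time of resolution)
    "email": ("Mail relay degraded", datetime.now() + timedelta(hours=2)),
    "vpn": ("Auth server failover in progress", datetime.now() + timedelta(minutes=30)),
}

def screen_call(service: str) -> str:
    """Return the script L1 should follow for this service."""
    outage = KNOWN_OUTAGES.get(service.lower())
    if outage:
        description, eta = outage
        return (f"Known issue: {description}. Estimated fix by "
                f"{eta:%H:%M}. Ask the caller to ring back if it "
                f"persists after that.")
    return "No known outage. Proceed with standard L1 troubleshooting."

print(screen_call("vpn"))
print(screen_call("printer"))
```

The point is only that the outage list is a lookup L1 performs first, fed to them by L2, not something discovered by escalation after twenty minutes of power-cycling.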
 
Grrr. The project I'm currently working on is to develop a single authoritative source for certain critical numbers-- the much vaunted (and mentioned) "source of truth" to replace the multitude of conflicting reports the division currently uses.

So what's the latest ask? One party wants to have different criteria for their version of the One True Data. So there will now be Two True Data, conflicting with each other, and both will be considered "the source of truth".

I'm the only person involved in this who sees this as a problem.

Were I in your shoes I would be tempted to determine which is truly the original data, and mark it so in documentation, even if one audience likes to think of a variant as a source of truth. Then they can call it what they want.
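One way to do that in practice is to store the original once, mark it canonical, and let each audience's "truth" be a labelled view derived from it, so their criteria can differ without creating a second independent dataset. A hypothetical sketch (field names and criteria are invented):

```python
# Hypothetical sketch: one canonical dataset, per-audience derived views.
# Field names and the filtering criteria are invented for illustration.

CANONICAL = [  # the dataset documentation marks as the original
    {"id": 1, "amount": 120, "region": "EU", "verified": True},
    {"id": 2, "amount": 80,  "region": "US", "verified": False},
    {"id": 3, "amount": 200, "region": "EU", "verified": True},
]

def derived_view(name: str, criteria) -> dict:
    """Build a labelled view; the label records that it is derived."""
    return {
        "label": f"{name} (derived from canonical, not a source of truth)",
        "rows": [row for row in CANONICAL if criteria(row)],
    }

finance_view = derived_view("Finance", lambda r: r["verified"])
sales_view = derived_view("Sales", lambda r: r["amount"] > 100)

print(finance_view["label"], len(finance_view["rows"]))
print(sales_view["label"], len(sales_view["rows"]))
```

The views can disagree with each other all they like; the documentation still points at exactly one original.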
 
That said, if there's a known issue, it is stupid for L1 to put a caller through troubleshooting and only resolve it as an outage through escalation. Known outages and how to screen for them should be critical information Level 1 has available from Level 2 and be part of the process.
That's why I used an outage as my example.

There may be any number of reasons issues occur. And in most well-run shops, IT are well aware of these before users are, and are working hard to negate them when the calls ramp up. So what they really don't need just then is having their valuable support resources diverted to run a whole bunch of folks through unnecessary L1 testing.
 
Were I in your shoes I would be tempted to determine which is truly the original data, and mark it so in documentation, even if one audience likes to think of a variant as a source of truth. Then they can call it what they want.

Ain't nobody got access to the original data, I'm the closest at three steps removed.
 
On the other hand, I must admit that even being exhaustively familiar with basic troubleshooting, when I have a problem that seems to stump me, about three out of four times basic troubleshooting that I decided was irrelevant resolved the issue anyway.
We go through the obvious steps because the obvious steps are the ones most often overlooked.
 
So I had an exciting morning. For some reason, almost all of the appointments had been double-booked, so I had twice as many people to deal with as I was prepared for.

Fortunately the process only takes about five minutes when the printer is fully warmed up. Unfortunately it can take about five minutes to warm up.

But I powered on through it and got it all done, including wrapping up a few anomalies that have been bugging me all week. I am unflappable. Other people get flapped, but flapping is not for me.

All up, I think I've had a really good week.
 
Jobs down. The culprit is an inefficient, poorly structured program... written by a mate and the nicest person in the office... stepping very carefully
 
remember we had those database performance issues and I had to get all stroppy and our company changed how they worked with the service provider and only then did we get a fix?

Well, the service provider has overwritten the fix.

At least twice.

How's that off-shoring going for you?
 
Why do so many of my users:

A) Do their taxes on their work PCs
B) In any way shape or form think it makes their tax returns in any way my job if they run into issues doing it?
 
Why do so many of my users:

A) Do their taxes on their work PCs
B) In any way shape or form think it makes their tax returns in any way my job if they run into issues doing it?

It’s on a computer, you are the computer guy, therefore anything to do with a computer is your problem that you need to fix!
 
remember we had those database performance issues and I had to get all stroppy and our company changed how they worked with the service provider and only then did we get a fix?

Well, the service provider has overwritten the fix.

At least twice.

How's that off-shoring going for you?


Don't you work on IBM mainframes? Aren't the fixes in SMP/E format with prereq info and such? I never worked with non-IBM products on MVS/zOS.
 
Why do so many of my users:

A) Do their taxes on their work PCs
B) In any way shape or form think it makes their tax returns in any way my job if they run into issues doing it?


Hmm, suggest to management that if people are doing tax returns on their work PCs, they might hold the company liable if there's a problem with, say, Excel when you update versions. In the best interests of protecting the company, perhaps such files should be deleted? After all, people might be trying to move such files between work and home and bring in viruses that will unleash ransomware, trash the network, set PCs on fire, etc.
 
Don't you work on IBM mainframes?

Might do.

"We" don't do any of the DBA/Sys prog stuff ourselves, the "service provider" does all that, so I think "we" struggle to spot poor service. Maybe that naivete is being taken advantage of. Making the client totally dependent is one of the aims of an out-sourcer after all.
 
Look away now, those of a nervous disposition: I'm going to speak ITIL. Raise an Incident ticket. As it's recurred, ask for a Problem record to be raised, and ask for the Change records for those updates to be marked as failed changes.
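In ITIL terms, those records link together roughly like this. A hypothetical sketch only; real ITSM tools each have their own schemas, and all the IDs and fields here are invented:

```python
# Hypothetical sketch of ITIL record linkage: a recurring Incident is
# tied to a Problem, which references the failed Changes. All invented.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Change:
    change_id: str
    status: str  # e.g. "successful" or "failed"

@dataclass
class Problem:
    problem_id: str
    summary: str
    failed_changes: list = field(default_factory=list)

@dataclass
class Incident:
    incident_id: str
    summary: str
    problem: Optional[Problem] = None  # linked once recurrence is recognised

# The fix was overwritten at least twice, so both deployments of the
# overwriting update are marked as failed changes.
prb = Problem("PRB-001", "DB performance fix keeps being overwritten",
              failed_changes=[Change("CHG-101", "failed"),
                              Change("CHG-115", "failed")])
inc = Incident("INC-042", "Database slow again", problem=prb)

print(inc.incident_id, "->", prb.problem_id,
      "failed changes:", [c.change_id for c in prb.failed_changes])
```

The useful part is the linkage: the Problem record keeps the recurrence visible even after each individual Incident is closed.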
 
Did I mention my daughter’s laptop?

She’s happily been using my 2012 MacBook Pro during her first year at university. It’s a bit long in the tooth but still a very nice machine.

Until about a month ago when it started doing distinctly odd things.

Couldn’t run the standard Apple diagnostics on it because successive OS updates have overwritten the BootROM and the necessary files won’t load.

Local authorised repair shop wanted €350 to open it up and give a quote. Any repairs would be on top of that.

An €80 battery from my usual Mac parts place and 15 minutes with screwdrivers and magnifying glass and it’s running perfectly again. Most difficult part was identifying the screw heads as they’re so tiny and I’m so old.
 
We raised a ticket. No one was following them up, hence the procedural changes I have hinted at. I'm not sure that we're good at SLAs, or if we even know what problem records are. It's quite frustrating, but the organisation is too big for me to affect this directly.
 
We raised a ticket. No one was following them up, hence the procedural changes I have hinted at. I'm not sure that we're good at SLAs, or if we even know what problem records are. It's quite frustrating, but the organisation is too big for me to affect this directly.

A big company that is not good at SLAs is a train wreck waiting to happen. I'm not the best at them, so I call in help from both legal and IT when dealing with them. I still lose sleep over them.

Have you ever heard of indeed.com? They help people get jobs.
 
I often recommend people take a 1-day ITIL foundation course, as picking up the vocabulary can be useful in bigger companies. I think the term your company needs to check is "underpinning contract" - i.e. what does the contract state in terms of SLAs, MTTRs, etc.
 