A while back I had the “fortune” of having to deal with the replacement of a failed disk inside a disk array on some HP-UX machine. Never mind the lack of any information regarding the hardware setup (dual controllers, multipathing, etc.)… The biggest problem was identifying the failed disk. Oh, and throw some LVM into the mix…

Of course, all the drives’ LEDs were green. That was no help. The machine was live (yes, there was a hot standby), so pulling the wrong disk would have been a little disaster…

I came across what I thought was a clever use of the dd utility to identify the failed disk. I say clever because, at least to me, it was not obvious until then.

When a disk fails, “usually” you either get some red LED glowing somewhere, or a green LED that stops lighting up… So, in my case, there would be no disk activity at all (it was not an I/O-intensive server) once I ran a dd read against the bad disk. Yes, a few risky assumptions were made, but given the situation, and weighing the pros and cons, it was the way forward…
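Before running anything, it helps to double-check the device file you are about to read. On HP-UX, ioscan can list the disk-class devices along with their device files; I am quoting the command from memory, so treat the exact flags as a sketch rather than a transcript from that session:

bash-3.00# ioscan -fnC disk

With the suspect path confirmed (c3t5d0, in my case), the trick itself was a single dd read: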

bash-3.00# dd if=/dev/dsk/c3t5d0 of=/dev/null

Yeah, it worked…
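Two small refinements, in case anyone wants to repeat this: you can bound the read so dd only generates a short burst of I/O instead of streaming the entire disk (the bs and count values below are just an illustration, not what I actually typed), and if the suspect device really is dead, dd-ing the neighbouring healthy disks one at a time will make their LEDs flicker, so the bay that stays dark is the one to pull.

bash-3.00# dd if=/dev/dsk/c3t5d0 of=/dev/null bs=1024k count=100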