The night actually started out clear, but with the moon up, so we started with spectroscopy: Cart 1, plate 627. Right after opening the building, we went to move the telescope up and found the MCP unresponsive - we could not even connect to it via telnet. We rebooted it and all was well.

Today, Dan is driving on his machine while Scot is operating Landru in multi-screen mode, but with a DIFFERENT Xserver than previously. Earlier in the evening, he got the old Xserver to die within 10 minutes of logging in. The new Xserver runs a different instance of the window manager on each screen and does not allow dragging windows across screens, but it does mean any window that pops up from another window will be on the same screen as its parent. We'll run with this and see if the X crashes go away. (Most of the time, btw, the "crashes" mean the X session dies and we are returned to the Xdm login prompt - although more severe crashes have also been experienced.)

Cart 1, plate 627 required a 15" E offset to find guide stars. We were running with the 72-point pointing model. Dan recommends running with this model and applying a 15" E offset if you can't find stars. Last night, the same plate with the same pointing model found the stars right away. What's up with that? We did reboot the MCP today, though. Combined with last night's data, plate 627 was finished in two exposures, and we had about an hour to go before the moon went down.

Since it was still clear (and the seeing still excellent), we decided to use this time to switch instruments so the moon would be down by the time the imager was ready. If we beat the clouds, we could always do an astrometric calibration scan. We did beat the clouds, and our top-priority strips did not have UCAC coverage, so we decided to set up on a FASTT field (field N) for initial setup and our needed astrometric calibration scan. Dan checked collimation using out-of-focus donuts and they looked good. In focus, we got seeing as good as 0.78"! The focus value was about -485. Clouds came in shortly thereafter and the seeing got worse. Sigh. At least the imager appeared to be working!

The skippy run during this scan from the logTool apparently failed in astrom, but a skippy run "by hand" worked fine (the nuErr was ~5"). You can look at the astrom murmur output to debug things....

At 22:41, the TCC axes "halted for a problem" (according to the watcher). Other errors from the watcher were:
Modu="prt Read" (Aug 22 22:40:52 MDT)
Modu="axe o Move" (Aug 22 22:40:52 MDT)
Modu="axe i SchMove" (Aug 22 22:40:52 MDT)
Modu="exe Track" (Aug 22 22:40:52 MDT)
Modu="prt ReadMatch" (Aug 22 22:40:52 MDT)

At this point, our FASTT field was 2 hours over, so we went to FASTT field O. (The clouds were also thickening - they came in just as we mounted the imager - of course....) The clouds and seeing improved shortly thereafter (by around 23:00). We got enough FASTT field data by 23:45 and went off to our first science scan (!!) - stripe 76 S - starting at lambda=133 (science to start by 139). No large focus changes or telescope offsets were needed, so all of this run contains good data. (This is run 2507.) At this point, though, the seeing was marginal. It later became quite good!

Skippy had a hard time finding enough stars in this scan to get accurate results (it never did). skyGang was of no help, as it revealed no catalog stars (unless it was showing no real stars, but it claimed to find several hundred, so that seems unlikely - at any rate, only red objects were shown).
ltMatch revealed our rotator position was OK, but we have no info on nu. Skippy occasionally ran and gave us 4 stars with OK results. This was not terribly reassuring, though.

It looks like the teamster may have died around 01:50, as we got a lot of "no gang transfer" errors. A teamster -force was accepted OK and seemed to work. We then noticed that the gangs had stopped showing up again. We continued to get "no gang transfer" watcher errors, so we re-started a teamster again! However, this did not seem to solve the problem. We saw no new gangs transferred. We started the teamster again (hey - who knows what else to do?). This did not seem to help the situation, though.

poolDir showed many gangs with flags g and c set - presumably (?) for gang and complete. ad1 gangs prior to frame 157 also had the a (archived) flag set, presumably because they had been transferred successfully to Unix, but frames after 157 did not have the a flag set (consistent with us not getting any gangs since then). id1 showed all the gangs with flags gca set - presumably because these gangs are archived to tape. We ran gangs2Unix 1 in an attempt to get the gangs over ourselves. Nothing happened, but the murmur log was giving us messages like:
Aug 23 03:13:09 sdsshost TMS 163469 TEXTONLY Wainting for command response from id5 (crate 2, ICC 1), data server (client 1)
We could find no other problems with id5.

A few small clouds appeared to the North around 02:30. By 02:40, clouds started appearing more in the W. It looks like we cleared our end lambda just as clouds began encroaching a bit too close for our liking. The last couple of frames may be contaminated by clouds - certainly by frame 336, we had clouds. We ended the run at lambda=-177, frame 339.

We grepped the murmur log and found some intriguing error messages which seemed to indicate the DA had lost the ability to write to sdsshost's shared memory. Knowing endNight would fail miserably if this wasn't fixed, we decided to let the normal archiving finish and reboot the crates to see if the connection was re-established. After 3 reboots, things worked. The error messages we found (by doing a grep TMS on the murmur log) are in the problems section below.

Clouds remained, so we closed, then noticed in the MCP laptop's tLatch murmur window that the alt axis was some 12000 ticks off. This amounts to almost 3' and was confirmed when the locking pin did not go in when the telescope thought it was at 90 deg. Looking at the murmur log, we see this error first occurred around 21:34 - just when we went to instChange from spectroscopy to imaging. The previous alt fiducial crossing, as we slewed to our spectroscopic field, was OK. We did set the fiducials (and verified things worked) at the beginning of the night, but alas, we were not watching the tLatch murmur messages while we observed. Shouldn't the watcher have shown us some "correction too large" error or something? We do not remember seeing such things.

While we were endNighting, we got a critical error on sp1 exic micro. We did an iack and it went away. What happened to sp1? endNight is running successfully as I finish this log.

This was my first night with logTool and I was actually quite impressed. The main problems were:
1) Cutting and pasting was haphazard at best - sometimes it worked, other times it didn't - and I could not figure out why or how (running fvwm2).
2) The scroll bar shapes are misleading and sometimes not easy to control precisely.
3) I exited the IOP that I called logTool from and it killed my logTool, losing some of my recent edits. Yes, I should not have done that, but it should not have done that either.
But - in general, I found it fairly easy and straightforward to use, though the cut/paste problems are huge. Editing is a bit of a nuisance in Tk's editor, but it's workable. Is it possible, say, to run vi in a Tk text widget?

I also ran all night with a new Xserver on Landru - the XiGraphics commercial server instead of the XFree86 server we were running. I crashed XFree86 within 10 minutes of logging in earlier tonight - XiG lasted all night without problem. (It's a slightly different interface, but one we can use effectively, I think.) Hopefully, this will fix our Landru troubles.