2018-08-09 §
10:40 <arturo> T201602 upgrade packages from jessie-backports (excluding python-designateclient) [tools]
10:30 <arturo> T201602 upgrade packages from jessie-wikimedia [tools]
10:27 <arturo> T201602 upgrade packages from trusty-updates [tools]
2018-08-08 §
10:00 <zhuyifei1999_> building & publishing toollabs-webservice 0.40 deb, and all Docker images T156626 T148872 T158244 [tools]
2018-08-06 §
12:33 <arturo> T197176 installing texlive-full in toolforge [tools]
2018-08-01 §
14:31 <andrewbogott> temporarily depooling tools-exec-1409, 1410, 1414, 1419, 1427, 1428 to try to give labvirt1009 a break [tools]
2018-07-30 §
20:33 <bd808> Started rebuilding all Kubernetes Docker images to pick up latest apt updates [tools]
04:47 <legoktm> added toollabs-webservice_0.39_all.deb to stretch-tools [tools]
2018-07-27 §
04:52 <zhuyifei1999_> rebuilding python/base docker container T190274 [tools]
2018-07-25 §
19:02 <chasemp> tools-worker-1004 reboot [tools]
19:01 <chasemp> ifconfig eth0:fakenfs 208.80.155.106 netmask 255.255.255.255 up on tools-worker-1004 (late log) [tools]
2018-07-18 §
13:24 <arturo> upgrading packages from `stretch-wikimedia` T199905 [tools]
13:18 <arturo> upgrading packages from `stable` T199905 [tools]
12:51 <arturo> upgrading packages from `oldstable` T199905 [tools]
12:31 <arturo> upgrading packages from `trusty-updates` T199905 [tools]
12:16 <arturo> upgrading packages from `jessie-wikimedia` T199905 [tools]
12:08 <arturo> upgrading packages from `trusty-wikimedia` T199905 [tools]
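The suite-by-suite upgrades logged above can be sketched as a small filter over `apt list --upgradable`: pick out the packages whose candidate version comes from one APT suite, then upgrade only those. The helper name and the field parsing are assumptions, not the actual command arturo ran.

```shell
# List upgradable packages whose candidate comes from one APT suite.
# Assumes `apt list --upgradable` lines of the form:
#   name/suite version arch [upgradable from: old-version]
upgradable_from() {
  suite="$1"
  awk -F'[/ ]' -v s="$suite" '$2 == s { print $1 }'
}

# On a node this would drive a targeted upgrade, roughly:
#   apt list --upgradable 2>/dev/null | upgradable_from trusty-wikimedia \
#     | xargs -r sudo apt-get install -y
```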
2018-06-30 §
18:15 <chicocvenancio> pushed new config to PAWS to fix dumps nfs mountpoint [tools]
16:40 <zhuyifei1999_> because tools-paws-master-01 had a load average of ~1000 due to NFS issues and processes stuck in D state [tools]
16:39 <zhuyifei1999_> reboot tools-paws-master-01 [tools]
16:35 <zhuyifei1999_> `root@tools-paws-master-01:~# sed -i 's/^labstore1006.wikimedia.org/#labstore1006.wikimedia.org/' /etc/fstab` [tools]
16:34 <andrewbogott> "sed -i '/labstore1006/d' /etc/fstab" everywhere [tools]
2018-06-29 §
17:41 <bd808> Rescheduling continuous jobs away from tools-exec-1408 where load is high [tools]
17:11 <bd808> Rescheduled jobs away from tools-exec-1404 where linkwatcher is currently stealing most of the CPU (T123121) [tools]
16:46 <bd808> Killed orphan tool owned processes running on the job grid. Mostly jembot and wsexport php-cgi processes stuck in deadlock following an OOM. T182070 [tools]
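Rescheduling entries like the ones above come down to finding the grid jobs running on one exec host and feeding them to `qmod -rj`. A minimal sketch, assuming gridengine's default `qstat` column layout (field 8 is queue@host); the helper name is invented:

```shell
# Given `qstat -u '*'` output, print the IDs of jobs whose queue instance
# lives on the named exec host.
jobs_on_host() {
  host="$1"
  awk -v h="$host" '$8 ~ h { print $1 }'
}

# On the grid master this would feed the reschedule, roughly:
#   qstat -u '*' | jobs_on_host tools-exec-1408 | xargs -r -n1 qmod -rj
```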
2018-06-28 §
19:50 <chasemp> tools-clushmaster-01:~$ clush -w @all 'sudo umount -fl /mnt/nfs/dumps-labstore1006.wikimedia.org' [tools]
18:02 <chasemp> tools-clushmaster-01:~$ clush -w @all "sudo umount -fl /mnt/nfs/dumps-labstore1007.wikimedia.org" [tools]
17:53 <chasemp> tools-clushmaster-01:~$ clush -w @all "sudo puppet agent --disable 'labstore1007 outage'" [tools]
17:20 <chasemp> tools-worker-1007:~# /sbin/reboot [tools]
16:48 <arturo> rebooting tools-docker-registry-01 [tools]
16:42 <andrewbogott> rebooting tools-worker-<everything> to get NFS unstuck [tools]
16:40 <andrewbogott> rebooting tools-worker-1012 and tools-worker-1015 to get their nfs mounts unstuck [tools]
2018-06-21 §
13:18 <chasemp> tools-bastion-03:~# bash -x /data/project/paws/paws-userhomes-hack.bash [tools]
2018-06-20 §
15:09 <bd808> Killed orphan processes on webgrid nodes (T182070); most owned by jembot and croptool [tools]
2018-06-14 §
14:20 <chasemp> timeout 180s bash -x /data/project/paws/paws-userhomes-hack.bash [tools]
2018-06-11 §
10:11 <arturo> T196137 `aborrero@tools-clushmaster-01:~$ clush -w@all 'sudo wc -l /var/log/exim4/paniclog 2>/dev/null | grep -v ^0 && sudo rm -rf /var/log/exim4/paniclog && sudo service prometheus-node-exporter restart || true'` [tools]
2018-06-08 §
07:46 <arturo> T196137 more rootspam today, restarting again `prometheus-node-exporter` and force rotating exim4 paniclog in 12 nodes [tools]
2018-06-07 §
11:01 <arturo> T196137 force rotate all exim paniclog files to avoid rootspam `aborrero@tools-clushmaster-01:~$ clush -w@all 'sudo logrotate /etc/logrotate.d/exim4-paniclog -f -v'` [tools]
2018-06-06 §
22:00 <bd808> Scripting a restart of webservice for tools that are still in CrashLoopBackOff state after 2nd attempt (T196589) [tools]
21:10 <bd808> Scripting a restart of webservice for 59 tools that are still in CrashLoopBackOff state after last attempt (P7220) [tools]
20:25 <bd808> Scripting a restart of webservice for 175 tools that are in CrashLoopBackOff state (P7220) [tools]
19:04 <chasemp> tools-bastion-03 is virtually unusable [tools]
09:49 <arturo> T196137 aborrero@tools-clushmaster-01:~$ clush -w@all 'sudo service prometheus-node-exporter restart' <-- procs using the old uid [tools]
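The scripted CrashLoopBackOff restarts logged above can be sketched as filtering `kubectl get pods` output for the stuck state and restarting each affected tool's webservice. The helper name and the restart invocation (`become` + `webservice restart`) are assumptions about the actual script:

```shell
# Print the namespace (= tool name) of every pod stuck in CrashLoopBackOff.
# Assumes `kubectl get pods --all-namespaces --no-headers` columns:
#   NAMESPACE NAME READY STATUS RESTARTS AGE
crashlooping_tools() {
  awk '$4 == "CrashLoopBackOff" { print $1 }' | sort -u
}

# Driving the restarts would then look roughly like:
#   kubectl get pods --all-namespaces --no-headers | crashlooping_tools \
#     | while read -r tool; do sudo become "$tool" webservice restart; done
```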
2018-06-05 §
18:02 <bd808> Forced puppet run on tools-bastion-03 to re-enable logins by debenben (T196486) [tools]
17:39 <arturo> T196137 clush: delete `prometheus` user and re-create it locally. Then, chown prometheus dirs [tools]
17:38 <bd808> Added grid engine quota to limit user debenben to 2 concurrent jobs (T196486) [tools]
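A grid engine per-user job cap like the one logged at 17:38 is normally expressed as a resource quota set added with `qconf -arqs`. A minimal sketch (the rule name is invented):

```
{
  name    limit_debenben_jobs
  enabled TRUE
  limit   users debenben to slots=2
}
```

Gridengine counts `slots` across all queue instances, so this caps the user at two concurrent jobs grid-wide.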
2018-06-04 §
10:28 <arturo> T196006 installing sqlite3 package in exec nodes [tools]
2018-06-03 §
10:19 <zhuyifei1999_> Grid is full. qdel'ed all jobs belonging to tools.dibot except lighttpd, and all tools.mbh jobs whose names start with 'comm_delin' or 'delfilexcl' T195834 [tools]
2018-05-31 §
11:31 <zhuyifei1999_> building & pushing python/web docker image T174769 [tools]
11:13 <zhuyifei1999_> force puppet run on tools-worker-1001 to check the impact of https://gerrit.wikimedia.org/r/#/c/433101 [tools]