--- Welcome to Stacey Vassallo, our new HR Partner for Computing starting May 1. Thank you, Jeff Artel, for your years of service!

--- FIFE Notes April newsletter is now available at http://fife.fnal.gov/. Articles include: “SC-PMT review 2017,” “New OSG resources from the 2017 AHM,” “Ways to improve your life: POMS updates,” “How to make datasets and influence storage: SAM4Users,” “What to expect when you’re registered for the fifth annual FIFE workshop,” “The art of efficiency” and “Best in Class” examples in distributed computing.

New employees

Kassandra Galvan (OCIO/Governance/Administrative Support)

Deepika Jena (SCD/Systems for Scientific Applications/Scientific Computing Simulation/Physics and Detector Simulation)

Daniel Szkola (SCD/Scientific Computing Facilities/Data Movement and Storage/Storage Services Administration)

Brandon White (SCD/Scientific Computing Services/Scientific Data Processing Solutions/Scientific Data Management)

Congratulations!

April anniversaries
(5, 10, 15 & 20+ years)

Phil Demar - 32 years
Darryl Wohlt - 32 years
Tim Doody - 26 years
Daniel Elvira - 26 years
Krzysztof Genser - 26 years
Robert Atkinson - 25 years
Cheri McKenna - 25 years
Penelope Constanta - 22 years
Ray Pasetes - 20 years
Adam Lyon - 15 years

Andy Romero
(Enterprise Services Operations/Storage and Virtual Services)

Prior to working at Fermilab, I worked for the United States Marine Corps as an aircraft electrician and for Motorola. In 1987, I started at Fermilab in the Research Division (now known as PPD). I joined Computing in the mid-1990s to manage Windows and Macintosh desktop and server systems.

Currently, I specify, install and maintain the storage systems that form the foundation of the CCD computing architecture. These storage systems provide block and file storage to other CCD services (virtualization, web services, Teamcenter, etc.) and to our customers across the lab (engineering, scientific and administrative/financial). I manage all aspects, including security, capacity, performance and lifecycle. Recently, lifecycle management has taken center stage. It includes decommissioning old, unsupported systems and ensuring that each data type is moved to the optimal storage service based on factors such as performance, availability, manageability and cost.

Last year, we migrated all content from the AFS file service to the central NAS-based file service. This year, we are working with our partners in SCD to refocus how they use the central NAS file service. We will be removing the "big data" used by grid/cloud batch computing services from the NAS, and SCD will store this data in dCache. Of course, the NAS file service will still be available for various SCD interactive computing needs.

In the next year, we hope to begin focusing on performance improvement by adding a solid-state drive tier to our NAS (file) and our SAN (block) storage facilities. We also hope to investigate using object storage as a low-cost archival tier.

When not at work, I enjoy spending time with my family.

From the CIO: Status of Labwide Budget Planning System
Rob Roser, Chief Information Officer

I thought I would use this month’s column to discuss the status of the labwide Budget Planning System (BPS). BPS is currently our highest-priority project and one that will significantly change the way Fermilab plans and executes its budgeting process. While we talk about it as “our” project, BPS is really a joint effort by Computing and Finance, with Finance having ownership of and ultimate responsibility for this system.

First, some background. As some of you know, Fermilab struggles when it comes to budget planning. Each division has its own tools and processes, and there is no single source of truth for the data, making scenario planning challenging. In addition, division and section leaders lack visibility into each other’s budgets. Consequently, budgets are calculated in non-uniform ways and at inconsistent levels of detail across organizations. BPS is meant to address these issues and more.

The BPS project consists of three broad categories of work: integrating our various data sources into the tool; configuring the tool to follow our workflows; and managing organizational change, which includes creating documentation, conducting training and communicating to stakeholders. Along these lines, we broke the project into two releases. (There are potentially more releases in the future with additional categories of work, but none are planned today.)

Release 1, completed in February, loaded financial and HR data from FY14 to the present into the tool and enabled the Finance Office and senior field financial managers (FFMs) to run reports against this data. A production process now loads new data into the system weekly. Release 2 will enable us to execute the budgeting process as “one lab,” loading all the large-project data (Mu2e, LBNF/DUNE, CMS Upgrade...) into the system and completing its configuration for our workflows.

This project has taken longer than we had hoped. We started with a charter and RFP in late 2014, selected a tool in May 2015 and selected an integration partner in September 2015. We struggled with the tool because its cloud instance was not as mature as advertised, and we lost a lot of time debugging. The larger hurdle, however, has been integrating and reconciling the data from the various data sources. Computing and Finance have spent significant effort validating the data, understanding the causes of inconsistencies and fixing them.

While we finish project implementation, our current plan is to move forward with the 2018 budgeting process using our already established methods. We now project that we will complete Release 2 in early 2018, a date that includes significant contingency for data validation and user acceptance testing. Once Release 2 is complete, a focused effort will be made to load the FY18 budget into BPS and then use it to manage the budget throughout FY18.

While this situation is less than ideal, we have a solid plan and are progressing toward the end goal. Thank you to everyone on the project team for your hard work.

~ Rob

Gerard Bernabeu Altayo
(Scientific Computing Facilities/High Performance Parallel Computing Facilities)

I joined Fermilab almost five years ago to work on FermiCloud and FermiGrid. Since then, I've had a few roles within SCD through which I’ve met most of you!

These days, I work within the HPC department, where we run the Lattice QCD clusters and the Wilson cluster, a development cluster with a broad variety of computing architectures interconnected by a low-latency InfiniBand network for experimenting with and optimizing codes. In my role as computing services architect, I evaluate potential computing and storage solutions for our experiments' ever-increasing needs. This means I get to learn and play with some very cool technology!

One of my recent projects has been to evaluate and benchmark ZFS, a file system relatively new to Linux, on JBODs (“just a bunch of disks”) as the next-generation building block for SCD's distributed storage solutions. We already use ZFS and JBODs broadly under Lustre file systems and for some high-performance Network File System (NFS) servers in the HPC department. If everything goes well, we may see these technologies in other areas of the division soon.

In my free time, I enjoy cooking and traveling as much as I can.

--- Gerard Bernabeu Altayo, Bonnie King, Art Lee, Marco Mambelli and Jessie Pudelek represented Computing at the annual Fermilab STEM Expo on April 19.


OCIO's Art Lee and Jessie Pudelek talk with students at the 2017 STEM Expo. Photo credit: Reidar Hahn

--- The Fermilab Library hosted a group of students from the College of DuPage on April 18. Kathy Saumell, Mary Cook, Teresa Graue Witt and Valerie Higgins presented on the library's various services.