Announcements

--- SharePoint training this month:
For end-users:

June 21, 10 a.m. - 11:30 a.m.
Sign up

For designers: (prereq: end-user training)
June 21, 1 p.m. - 2:30 p.m.
Sign up

CERN to use Synergia for beam dynamics simulations

Synergia graphic
Horizontal phase space in the GSI SIS18 Synchrotron showing resonance islands. Image: Jim Amundson

CERN scientists will use software developed at Fermilab to help simulate collective effects--the impact that particles in a beam have on each other--on beam dynamics in accelerators as they upgrade the LHC.

The LHC high-luminosity upgrades will include the LHC Injector Upgrade (LIU) project. The injectors to the LHC operate at much lower energies than the LHC itself, making them vulnerable to collective effects, including space charge. Consequently, in an effort to understand how these collective effects affect the beam, last fall CERN initiated a project to identify simulation packages to use in preparing for the upgrade. Synergia, a code developed by Fermilab's Computational Physics for Accelerators group under the Community Project for Accelerator Science and Simulation (ComPASS), is one of two codes currently being benchmarked for this purpose, judged on their ability to reproduce confirmed results from another machine.

Synergia is a sophisticated version of a particle-in-cell (PIC) code; such codes calculate space charge from a set of macroparticles that represents the much larger number of real particles in the beam. Synergia is self-consistent: it includes the impact of space charge on the beam in its calculations. In contrast, other codes make assumptions about the behavior of the beam but do not account for the effect that space charge has on those assumptions. If the space charge proved to be large, their calculations would no longer be accurate.
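
Synergia's real algorithms are considerably more elaborate, but the basic PIC idea (represent the beam with macroparticles, deposit their charge onto a grid, solve for the resulting fields, then kick the particles) can be sketched in a few lines. The Python snippet below is a minimal illustration of the first step, linear "cloud-in-cell" charge deposition on a one-dimensional periodic grid; it is not Synergia code, and the function name, geometry and numbers are assumptions made for this example.

import numpy as np

def deposit_charge_1d(positions, macro_charge, n_cells, cell_size):
    """Deposit macroparticle charge onto a 1-D periodic grid using linear
    (cloud-in-cell) weighting, the first step of a PIC space-charge calculation.
    Each macroparticle stands in for many real beam particles."""
    rho = np.zeros(n_cells)
    for x in positions:
        cell = int(x // cell_size)          # grid cell to the left of the particle
        frac = x / cell_size - cell         # fractional distance into that cell
        rho[cell % n_cells] += macro_charge * (1.0 - frac)
        rho[(cell + 1) % n_cells] += macro_charge * frac
    return rho / cell_size                  # charge density on the grid

# Toy usage: 10,000 macroparticles in a 64-cell periodic domain.
rng = np.random.default_rng(0)
positions = rng.normal(loc=32.0, scale=5.0, size=10_000) % 64.0
rho = deposit_charge_1d(positions, macro_charge=1.0, n_cells=64, cell_size=1.0)

A full PIC step would then solve for the electric field produced by this charge density and apply the resulting space-charge kick back to the same macroparticles, which is what makes the calculation self-consistent.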

At an April CERN workshop, Synergia's results were presented by Eric Stern and Jim Amundson of the Computational Physics for Accelerators group in the Scientific Computing Division. "We managed to get the most accurate results of any PIC code so far," Amundson remarked.

[View presentation, "Certifying Synergia for CERN Accelerators"]

~Clementine Jones

Outreach

If you have an outreach activity, a presentation for the FCC lobby display or questions about how to get involved, please contact Ruth Pordes or Margaret Votava.

--- Now Playing in the FCC lobby

Continuous Service Improvement Process: Make it Sustainable - Tammy Whited

Problem Management: Facing Challenges And Advocating for Success - Gerald Guglielmo

Physics and Detector Simulation Projects in the Scientific Computing Division - V. Daniel Elvira for the DS-ADSS group

Fermilab Overview - Young-Kee Kim, Fermilab Program Advisory Committee Meeting, June 4, 2013

Milestones
Person on mountain summit

Welcome, new employees!

Narasimha Nunna (CCD/Enterprise Services/Database Services)

Geoffrey Cluts (CCD/Service Operations Support/Desktop Engineering)

May anniversaries
(5, 10, 15 & 20+ years)

Adam Walters - 33 years
Marc Mengel - 22 years
Eric Neilsen - 22 years
Roger Slisz - 22 years
Matt Crawford - 21 years
Glenn Cooper - 15 years
Ryan Rivera - 10 years
Anna Olivarez - 5 years

Common cordesy
electrical cord

During walkthroughs, it is common to find electrical cord “faux pas” in offices, construction areas and data centers. Proper use is simple and can prevent fires or shock.

Use electrical cords properly. Plug them directly into the wall or into an outlet strip. Never chain cords together. If your cord is not long enough, you can purchase one from the stock catalog. Search under “outlet strip.”

Protect the cords that you have. Never run them under a rug or carpet or through a slightly open window or door. Make sure that they are not draped over a sharp edge.

Periodically inspect your cords. Look for wear or breaks on the cord's insulation. Check the prongs to see if they are loose or damaged. If damage is present, take the cord out of service and replace it. The Prep Counter in FCC 1E will take damaged cords or power strips and dispose of them properly.

~Amy Pavnica

Perspectives on NLIT
Santa Fe, NM. Photo: Mark Kaletka

Several members of the Computing Sector attended this year's National Laboratories Information Technology Summit in Santa Fe, New Mexico, last month. The summit brings together IT professionals from all of the Department of Energy laboratories.

We asked the attendees for their insights and observations about this year's NLIT Summit. Here are some of their responses:

Jerry Guglielmo (CCD, OCIO/Service Management/Process Managers):

There was a talk on Lean IT, which essentially is about doing more with less by increasing efficiency and not by decreasing staff. Standardizing work practices and focusing on what is needed to provide value to the customer are some of the key concepts. While the terms used may be different, the concepts are similar to human performance improvement (HPI), Information Technology Infrastructure Library (ITIL) and I suspect other frameworks and standards that are based on the notion of best practices. People have been delivering services since well before Information Technology as we know it existed, and while the specifics may be evolving, there are general concepts that remain relevant.

Understanding customer satisfaction, like many other things, can be a complex challenge. Distilling satisfaction down to just one number, or even a set of high-level numbers, can lead to inaccurate conclusions. Additionally, there are several challenges to combining information from multiple questions into one rating. The quantities measured may be dissimilar, and combining them may not be meaningful. Even if they can be combined, how does one determine the appropriate weighting for each quantity? For example, the rating for the appropriateness of a solution may go up while the rating for time to resolution goes down, so the overall rating stays the same. Is this good, bad or indifferent? The concerns here can be generalized to any dashboard, and the challenge is to think carefully about whether a combination of numbers actually makes sense.
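
As a hypothetical illustration of the weighting problem described above, the short Python example below combines two made-up ratings on a one-to-five scale with equal weights. Between the two quarters the individual ratings move in opposite directions, yet the combined dashboard number is unchanged. The metric names, weights and values are invented for this example.

# Hypothetical ratings on a 1-5 scale; the equal weights are an arbitrary choice.
weights = {"solution_appropriateness": 0.5, "time_to_resolution": 0.5}

last_quarter = {"solution_appropriateness": 3.0, "time_to_resolution": 4.0}
this_quarter = {"solution_appropriateness": 4.0, "time_to_resolution": 3.0}

def combined_score(ratings, weights):
    # Weighted average of the individual ratings.
    return sum(weights[k] * ratings[k] for k in weights)

print(combined_score(last_quarter, weights))   # 3.5
print(combined_score(this_quarter, weights))   # 3.5 -- same number, different story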

Phil Demar (CCD/Network and Virtual Services/Wide Area Networking and Network Research):

The NLIT networking track was new this year and seemed to generate modest interest. Approximately half of these talks were provided by Los Alamos National Laboratory, the host lab. Fermilab presented two talks, one on our wireless futures and the other on our IPv6 deployment plans. Andy Rader drafted the wireless talk, which was presented by Anna Olivarez and me, while I drafted and gave the IPv6 talk. In general, computing mobility support seemed to be the area of highest interest across the various NLIT technology tracks, with wireless support and software-defined networks receiving the most attention within the networking track.

If there was one overriding impression I took away from NLIT, it would be that we are very fortunate to have a relatively open network environment here. Many labs, particularly on the National Nuclear Security Administration side, are highly constrained in deploying wireless services for mobility support. While we are able to look forward to meeting the wireless technology challenges of the future, many of those other labs have to focus on working around their policy barriers of today.

Keith Chadwick (SCD/Scientific Computing Facilities/Grid and Cloud Computing):

I presented a talk, "Virtualization and Cloud Computing in Support of Science at Fermilab," and attended many interesting talks on a variety of topics, including:

- Virtual Network Core Infrastructure to Meet Evolving Mission
- Network virtualization: the SDN you really want
- Science Applications in the Cloud
- Drive-thru Windows (and Linux): Would you like fries with that server?
- Defeat the Tyranny of Averages: Find Out How Satisfied Customers *Really* Are
- Can we Identify Spear Phishing Targets before the Email is Sent?
- Software Defined Networks & Openflow - What do they mean for my lab?
- E-Discovery: People, Processes, and Tools for Success
- Unclassified Private Cloud Services at LANL
- Agile Methods as a Risk Mitigation Strategy
- some of the Fermilab presentations
- a special presentation on Wen Ho Lee

In addition, I had discussions with the various vendors present at NLIT, including Hitachi Data Systems (HDS), Dell, Splunk and CDWG. The discussion with HDS was especially productive--based on recommendations from their representative, I was able to suggest alternative BlueArc storage configuration options for the IF/GCC BlueArc storage purchase that resulted in a savings of approximately $50,000.

Tammy Whited (OCIO/Service Management):

As a first-time attendee of the NLIT Summit, I was impressed with the tracks that were available for the laboratory community to exchange information. I attended a few talks in the IT Governance track and learned a lot about what other labs are doing. We are advanced in many areas, so we were able to provide helpful information to the other labs. There were also opportunities to get together one on one or in small groups to dive deeper into issues you may be facing and to learn how other labs are addressing them. I really found that to be a valuable part of the summit.

View the NLIT 2013 Highlights presentation by Mark Kaletka, as well as all of the presentations given at the summit by Computing Sector members.

Core Computing Division spotlight
Jeff Blaha

Jeff Blaha
Network and Virtual Services/Network Services

My role within the Network Services team revolves around the design, installation, troubleshooting and day-to-day support of both the campus network and the scientific networks. The campus networking effort involves ensuring consistent wired and wireless connectivity for the majority of end users at Wilson Hall, FCC and the village, while the scientific networking work includes maintaining the network infrastructure within the data centers at FCC, GCC and LCC.

My recent projects have included the on-site support of the power upgrades in the eighth-floor Fiber Central room at Wilson Hall and the second-floor data center at FCC. We also successfully completed a consolidation of the network switches at GCC. All of these projects will support newer and expanded projects across the Fermilab complex for the foreseeable future.

Currently, I am working on a project to design, configure, install and implement a new redundant data center distribution switch at FCC. The new switch will keep data moving for all users with minimal intervention or downtime. I am also helping with the testing of a new wireless provider, which will enable us to provide a more robust and stable wireless network infrastructure within all the facilities at Fermilab.

Scientific Computing Division spotlight
Mike Kirby

Mike Kirby
Scientific Programs/Running Experiments

As part of the Running Experiments (REX) group in SCD, I have served as the liaison for a number of experiments in the Intensity Frontier and Cosmic Frontier. As such, I help to ensure that experiments are able to fully utilize the available computing resources and products within SCD in order to meet their analysis requirements. As an extension of this liaison work, I am currently leading the FabrIc for Frontier Experiments (FIFE) project, which will provide collaborative scientific-data processing solutions for experiments across all frontiers at Fermilab. The FIFE project is drawing from the expertise and tools across SCD to provide fully integrated offline-computing solutions to experiments in the era of big data and distributed computing.

I am currently a member of MicroBooNE and the Long Baseline Neutrino Experiment, which are both liquid argon time projection chamber (LAr TPC) experiments that will use large detectors filled with liquefied argon to study the properties of neutrino beams generated at Fermilab. Measurements will include the rate of interaction of neutrinos on liquid argon and neutrino oscillation parameters that could potentially explain why the universe consists of matter instead of antimatter.

For both experiments, I am working to develop automated event-reconstruction algorithms, using software written for these experiments, in an attempt to identify individual electrons, muons, protons and photons in the detector. The automation of event reconstruction will be a major step forward for LAr TPCs. In addition, I have recently begun exploring the use of graphics processing units (GPUs) to identify charged-particle trajectories in the MicroBooNE detector, which could improve the speed of reconstruction by a factor of ten or more.

In my spare time, I am an avid cyclist and frequently participate in racing at the Northbrook Velodrome, regional criteriums, or amateur events in Belgium. When I'm not training or racing on my bike, I can be found dancing two-step or swing at small music venues around Chicago.

Tips of the month

Service Desk: How to view a user's ServiceNow footprint

Service providers: in ServiceNow you can view information about another user (or yourself), such as which ServiceNow groups they are members of, what open tickets they have and which configuration items (CIs) are associated with them, among other information.

For detailed instructions on how to access this information:

1. If you are not yet logged in to ServiceNow, log in.
2. Then open this knowledge article.


SharePoint: Quick guide for those with "design" permission

A new quick guide is available for those with SharePoint "design" permission or higher. It is a condensed version of the instructions found in the existing SharePoint Designer Training manual. Topics covered in the quick guide include creating and editing lists, document libraries, Web Parts, Views and site pages; managing navigation; enabling versioning and requiring the check-out of files for a library; and adding columns to a list.

[View the Designer Quick Guide]