Coming Soon: New Password Reset Tool
The Authentication Services group will soon release a new and improved password reset tool. Improvements include:
- An alternate email option for Services accounts
- A more streamlined enrollment process
- The ability to view answers to the enrollment security questions
- Password maintenance for the FermiTest domain
The Open Source & Technology Enthusiasts group typically holds monthly meetings discussing new technology readily available to the public. The next meeting will be on August 14. Meeting announcements and detailed notes are sent to the firstname.lastname@example.org mailing list.
From Softpedia: Fermilab releases Scientific Linux 7.0 Alpha 2
Upcoming SharePoint training - August 15
SharePoint for Contributors (end-users) 10 a.m. - 11:30 a.m.
This 90-minute course will lead attendees through the basic concepts and commands relevant for end users and contributors. It covers working with a SharePoint team site, including site navigation, document libraries, lists, web parts and web pages. More information/enroll.
SharePoint Designer Training - 1 p.m. - 2:30 p.m.
Basic concepts for designers including creating lists, document libraries and creating and editing pages in SharePoint 2013. More information/enroll.
SharePoint Site Owner introductory training - 3 p.m. - 4:30 p.m.
Basic concepts for site owners including managing site permissions, updating site navigation, creating and maintaining sub-sites and restoring deleted site content in SharePoint 2013. More information/enroll.
What is the CMS data challenge?
The Large Hadron Collider will begin its second run of particle collisions in 2015. Before then, Oliver Gutsche and the SCD CMS Computing Support group will participate in the Computing, Software and Analysis Challenge (CSA14). Taking place this summer, the challenge tests improvements to the worldwide distributed computing infrastructure that CMS uses to process and store data collected from the detector and from simulated proton-proton collisions.
Fermilab is a major provider of storage, production and analysis CPU resources for CMS. The LHC run starting in 2015 will have a collision energy of 13 teraelectronvolts (TeV), nearly double that of the first run. The higher energy increases the trigger rate, and a higher trigger rate directly increases the amount of data collected. Accordingly, Fermilab and the rest of the CMS grid infrastructure must be able to handle that additional data without a hitch.
“More computing resources are needed for LHC Run 2. We are doing a lot of optimizing and improving with the resources we have to keep the computing requirements within our bounds,” says Gutsche. “There are many aspects that have to be tested.”
Eric Vaandering, a member of the CMS Program Support group, along with a team of physicists at the LHC Physics Center at Fermilab, is beta-testing version 3 of the CMS remote analysis builder (CRAB3), the submission tool that allows CMS collaborators to submit their analyses to the grid. CRAB3 aims to improve the reliability with which data is analyzed and returned to users. This update to CRAB relies heavily on GlideinWMS from the Open Science Grid and transfers output files to users asynchronously, separating the success of processing from the success of transfers for higher reliability. CRAB3 can also resubmit failed jobs that are likely to succeed on a second attempt. Ultimately, the new version of CRAB is designed to be more user friendly and more efficient in its use of resources.
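Automatic resubmission follows a common pattern in grid job management: retry failures that look transient, give up on those that do not. The sketch below is a generic illustration of that idea, not CRAB3's actual logic; the error codes and retry budget are invented for the example.

```python
# Generic sketch of retry-on-transient-failure logic, as used by grid
# submission tools. The error codes here are invented for illustration;
# this is not CRAB3's real classification.
TRANSIENT_ERRORS = {"site_unavailable", "transfer_timeout", "worker_lost"}

def should_resubmit(error_code, attempts, max_attempts=3):
    """Resubmit only transient failures, and only within a retry budget."""
    return error_code in TRANSIENT_ERRORS and attempts < max_attempts

# A transfer timeout is worth a second attempt; a crash in the user's
# analysis code is not.
assert should_resubmit("transfer_timeout", attempts=1)
assert not should_resubmit("bad_user_code", attempts=1)
```

The key design point is separating failure classification from the retry loop itself, so that the scheduler can spend its retry budget only where a second attempt is likely to pay off.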
For CSA14, the goal is to test CRAB3 at capacities beyond those anticipated during LHC Run 2. A second test will check the capabilities of a new data format, the mini analysis object data (mini AOD). Mini AOD helps physicists do their analyses more quickly and efficiently than its predecessor, AOD, because it reduces the amount of information stored per event.
“Mini AOD will greatly reduce the event content to 30KB per event, down from 300KB in the regular AOD,” said Gutsche. “In the end, we expect 80 percent of the analyses will use mini AOD. The challenge is successful when users tell us mini AOD works.”
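The quoted per-event sizes translate into large storage savings at scale. The sketch below is a back-of-envelope illustration; the billion-event sample size is a hypothetical figure chosen for the example, not a number from the article.

```python
# Back-of-envelope estimate of mini AOD storage savings.
# Per-event sizes come from the article; the sample size is hypothetical.
AOD_BYTES_PER_EVENT = 300 * 1024      # ~300 KB per event in regular AOD
MINI_AOD_BYTES_PER_EVENT = 30 * 1024  # ~30 KB per event in mini AOD

def dataset_size_tb(n_events, bytes_per_event):
    """Return the dataset size in terabytes for n_events events."""
    return n_events * bytes_per_event / 1024**4

n_events = 1_000_000_000  # a hypothetical billion-event sample
aod_tb = dataset_size_tb(n_events, AOD_BYTES_PER_EVENT)
mini_tb = dataset_size_tb(n_events, MINI_AOD_BYTES_PER_EVENT)
print(f"AOD:      {aod_tb:.0f} TB")
print(f"mini AOD: {mini_tb:.0f} TB (a {1 - mini_tb / aod_tb:.0%} reduction)")
```

Since the per-event size drops tenfold, any sample shrinks by 90 percent regardless of how many events it contains.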
The time between the data challenge and the start of LHC Run 2 will be used to resolve any issues found during the challenge. The results will be reported this fall, just in time for the LHC to emerge from its hibernation in spring 2015.
~ Byron Mcguire
Welcome, new employees!
Jeffery Spidle (CCD/Network and Communication Services/Network Services)
Christopher Sheppard (CCD/Service Operations Support/Desktop Engineering)
Krista (SCD/Scientific Computing Facilities/CMS Computing Facilities) and Scott Majewski welcomed a baby boy, Maxton “Max” Michael, on July 4 at 11:30 p.m. Max weighed 7.5 pounds and was 21 inches long when he was born and conveniently helped his parents avoid long lines leaving a fireworks display.
(5, 10, 15, and 20+ years)
Shirley Jones - 41 years
Sheila Cisko - 35 years
Donald Flynn - 30 years
Ken Fidler - 26 years
Laura Mengel - 23 years
SharePoint Tip of the Month: Granting all authenticated users access to a FermiPoint (SharePoint 2013) site:
Site owners can grant all authenticated users (anyone at Fermilab with an active Services account) access to their FermiPoint site by adding the “Active Directory” group (Role) DomainUsers to a permissions group. For example, if an owner adds the “(Role) DomainUsers” group to the “Visitors” group, all members of the “(Role) DomainUsers” group would then have read-only access to their FermiPoint site.
For more information about managing site permissions, please see the Site Owner user manual.
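Site owners normally make this change through the site permissions pages, but the same operation can be scripted against the SharePoint 2013 REST API. The sketch below only constructs the endpoint URL and request body; the site URL and the claims-encoded login name are placeholders, and actually sending the request would also require authentication and a request digest header.

```python
import json

def add_group_member_request(site_url, target_group, login_name):
    """Build the SharePoint 2013 REST call that adds a user or AD group
    (e.g. the "(Role) DomainUsers" role group) to a site permissions
    group. Returns the endpoint URL and JSON body; it does not send
    anything."""
    endpoint = (f"{site_url}/_api/web/sitegroups/"
                f"getbyname('{target_group}')/users")
    body = json.dumps({
        "__metadata": {"type": "SP.User"},
        "LoginName": login_name,
    })
    return endpoint, body

# Placeholder values for illustration only.
url, body = add_group_member_request(
    "https://fermipoint.example.gov/sites/mysite",  # placeholder site URL
    "Visitors",
    "c:0+.w|placeholder-claims-login",  # placeholder claims-encoded name
)
```

The same POST, with real credentials and the site's actual claims-encoded group name, would grant the read-only access described in the tip above.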
From the CIO: Dog days of summer
The calendar may say it’s summer, but it sure doesn’t feel like it. The pace has been fast and furious both in the Computing Sector (am I still allowed to call us a sector?) and in many other parts of the lab.
On June 30, we rolled out FermiWorks, a joint effort between computing and human resources. The goal of this project is to modernize human resources by providing a state-of-the-art tool that will not only automate many business functions but also give managers capabilities and information not previously available. This product is truly a paradigm shift for how the lab conducts its business.
On the day we went live, I was in San Francisco attending a meeting with all of the CIOs of the DOE national science labs, my first meeting with this group. I was an instant celebrity when I announced during introductions that Fermilab had rolled out Workday, the commercial product behind FermiWorks. Most of the other labs are interested in Workday, but we are in the lead in terms of its development and deployment. I was barraged with questions at the first coffee break and beyond!
To my knowledge, this is the most complicated systems integration effort we have ever attempted. I am so proud of the team that got this done. The effort, dedication, and professionalism of those in the OCIO, CCD, and SCD who worked on this project were and continue to be inspiring. I would like to thank each and every one of you. I would also like to thank those whose roles were expanded to free up others to work on this. This was truly a team effort.
It will take us a few more months to stabilize things, and there is already a list of planned enhancements that will continue for quite a while. But the capabilities that FermiWorks will provide are truly a game changer for the lab.
Though there’s always more work to do, I hope that we can all enjoy a piece of summer before the first frost arrives.
The now decommissioned Jpsi cluster in the Grid Computing Center.
This month, the Cosmology computing cluster received a boost of power, thanks, in part, to some older hardware. The cluster received 120 of the total 856 machines in the decommissioned Jpsi cluster. Sharing its name with a subatomic particle, the Jpsi cluster was previously housed in the Grid Computing Center. Now the machines reside in the Cosmology cluster in the Lattice Computing Center.
Scientists use these machines to run various simulations and crunch data from experiments throughout the lab. In addition to the increase in computing power, machines in the Cosmology cluster received a boost in memory: each computer now has 16 GB of RAM, doubling the previous capacity. The hardware upgrade adds five teraflops of computing power to the Cosmology cluster, about 14 percent of the 37 teraflops in the decommissioned Jpsi cluster. That extra capacity matters for massive jobs such as cosmic ray shower simulations.
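The 14 percent figure above is straightforward to verify; the sketch below is just that arithmetic, using the teraflop numbers from the article.

```python
def share_of_total(part_tflops, total_tflops):
    """Fraction of total computing power that `part_tflops` represents."""
    return part_tflops / total_tflops

jpsi_total = 37.0  # teraflops in the full decommissioned Jpsi cluster
moved = 5.0        # teraflops now serving the Cosmology cluster

# prints "14% of the Jpsi cluster's power"
print(f"{share_of_total(moved, jpsi_total):.0%} of the Jpsi cluster's power")
```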
Not only were several computers re-used, but plenty of other hardware was as well.
The new machines in the Cosmology cluster, located in the Lattice Computing Center
“All of the regular cables, the networking cables, the racks and the machines are moved over,” said Amitoj Singh, deputy department head of the High Performance Parallel Computing Facilities group. “The only things we purchased were the power distribution units. It’s like a lemon. We are trying to squeeze out the last few drops. We are recycling and making good use of the hardware before it goes bad.” The remaining 736 machines from the Jpsi cluster, along with the 120 machines that are being replaced in the Cosmology cluster, will be moved.
“A lot of the remaining Jpsi cluster machines still have life in them. If somebody has a use for them, they will be in the Site 38 warehouse,” said Singh. Note that any equipment that has reached its vendor end of life will not be re-used.
The upgrade will give the computers within the Cosmology cluster increased power to solve familiar problems for a few more years to come.
~ Byron Mcguire
CCD/Cyber Security Services/Authentication Services
I joined the Authentication Services group in February 2013. I love to learn, so I was eager to dive into the many projects the group was working on. The main purpose of our group is to provide Fermilab users robust, secure and convenient methods for accessing various lab-wide services such as workstation login, email, web services, ServiceNow, SharePoint and many others. For example, this year we set up an Eduroam identity provider service, which allows users with a Fermilab account to access wireless networks at participating locations around the world. We are also working on a Shibboleth-based federation service, which will provide single sign-on opportunities with other participating institutions.
Recently, we finished implementing an RSA SecurID two-factor authentication service that increases security by requiring a code from an RSA token (a device, something you have) in addition to the password (something you know) to log in to a server or network appliance.
Currently, I am helping to implement the Dell One Identity Manager solution. I am very excited about the possibilities this software offers. Our first goal is to automate the onboarding and offboarding of users, including provisioning and de-provisioning accounts and other resources in central services. The Identity Manager solution will need to communicate with many systems and applications around the lab and will be integrated with FermiWorks, which will serve as its single source of truth. This implementation will require significant teamwork among many groups within CCD and the rest of the lab. In the end, our goal is to serve the entire lab, both the business and scientific communities, and make their work easier.
I have also been working on becoming a Kerberos expert. I am pleased to help users debug various problems and to come up with the most secure, convenient way to use Kerberos for their needs. In addition, I am preparing to upgrade Fermilab's Kerberos hardware and software infrastructure to modernize the platform.
OCIO/Service Management/Process Managers
OCIO/Service Management/Business Analysts
I started working at Fermilab in November 1997, when my department was part of the former Business Services Section, and began my Fermilab career as the manager of Production Applications. Several reorganizations and almost 17 years later, I am now working in the Service Management area of the Office of the CIO and wear several hats.
Most of the time I am found wearing my change and release manager hat. I was part of the team that helped roll out the service management process. Service management ensures that the quality of existing and new services is sustained in a holistic and scalable way. The service management team oversees the implementation of ITIL and measures and improves service quality and delivery in line with the ISO20K international standard. Change and release management is part of service management, and I spend much of my time making sure that changes and releases introduced into our production environments are properly vetted and any risks are mitigated (e.g., Is there an installation and backout plan? Was testing successful? Have we communicated the change to those who may be impacted?). In the near future, I will be onboarding additional groups from both CCD and SCD to change/release management as we continue to bring additional services under ISO20K.
I also wear the hat of service management liaison for the Finance Section. In that role, I help ensure that Finance employees are well served by the computing services they receive. I like to think this position is very similar to a hotel concierge: I try to make sure my “guests” have a trouble-free experience with their computing services.
The last hat I wear is that of a business analyst. In that role, I help the lab implement technology solutions by helping to create project charters, determining requirements, communicating them clearly to all stakeholders and using requirements to drive the design of test cases.