Science Computing Standards of Service

Submitted to PBSci RITAC
Approved by Dean Koch, 11/29/2021


Hours of Support:

Science Computing will respond to most requests within 24 hours, excluding weekends and holidays. The typical work schedule is M-F 8am-5pm. The team does not provide 24/7 support, though certain non-standard or emergency situations may warrant after-hours effort. Non-standard or emergency responses will be handled on an ad hoc basis and in consultation with the Science Computing Director.


Response time goals:

All timelines exclude holidays and weekends.

  • Standard inquiry: initial meaningful outreach within 24 hours
  • Software installations: initial investigation within 3 working days; final install within 2 working weeks
  • Hardware & OS installations of standard equipment: within 5 working days of equipment arrival and the identification and preparation of space
  • Hardware & OS installation of complex equipment (e.g., clusters): within 3 months, or within a timeframe agreed upon with faculty stakeholders

Adjustments to timelines will be proactively communicated by the ITS team and mitigated whenever possible.

Standard Support:

Affiliations eligible for standard support:

  • Active research faculty
  • Emeritus faculty with active research grants
  • Research Scientists
  • Graduate students
  • Postdoctoral fellows
  • Divisional Staff
  • All Observatory-affiliated staff and research scientists

Equipment eligible for standard support:

  • Managed desktop and laptop computers purchased with University funds 
  • Managed faculty servers/clusters purchased with University funds
  • Managed instrumentation computers purchased with University funds
  • Limited printer support (phasing out)
  • Standard operating systems (current versions of Windows 10, Windows Server, LTS Ubuntu, Rocky 8, TrueNAS)

Affiliations ineligible for standard support:

  • Emeritus faculty without active research grants
  • Lecturers
  • Teaching Professors

Equipment ineligible for standard support:

  • Personally owned computers, servers, or other devices
  • Systems purchased with non-University funds
  • Mobile devices (iPads, iPhones, Android devices, etc.) and smart televisions
  • Systems not meeting Campus Minimum Connectivity Requirements
  • Dual boot computers
  • Computers with non-standard operating systems
  • Systems purchased without first consulting Science Computing staff
  • Conference Rooms and instruction spaces
  • Services provided by Campus ITS (see below)

Central ITS provides support for the following infrastructure and services. Please contact Central ITS for help with any of these issues:

  • Wireless and building-specific networks
  • Website development and hosting
  • Services provided by central-campus ITS (examples include Email/Gmail, Calendar, Eduroam, Duo MFA, Canvas, SETS, Zoom, Lecture capture)
  • Classrooms and instructional spaces
  • Support for emeriti faculty without active research programs whose needs are minimal (e.g., help with email or affiliate accounts)


Getting Help:

Submit help requests via the electronic service request system. Problem reporting may occur verbally or by other informal communication; however, informal requests should not replace submitting an electronic service request. End users should not email Science Computing staff directly except when requested to do so by Science Computing staff.

Buying computer hardware and associated equipment:

Computers and servers should be purchased in consultation with Science Computing. Purchases made without consultation may not be supportable if the hardware is incompatible with UCSC infrastructure.

Host Management Options for Faculty and Researchers:

PBSci faculty have the option to have some or all of their workstations, servers, or research systems managed or unmanaged by Science Computing staff:

  • Managed Machines

Managed machines are completely maintained by Science Computing staff. Science Computing staff are responsible for all system maintenance, including patching, software and hardware installation. Users generally do not have administrative (root or sudo) access to these systems. Costs for repairs and replacement parts, if necessary, are borne by the system owner.

To be eligible for support, systems must be purchased with University funds and in consultation with Science Computing staff to meet Science Computing and campus data center requirements. Systems must also conform to campus Minimum Connectivity Requirements.

  • Unmanaged Machines

Unmanaged systems are completely maintained by the end user and require written acknowledgement of the following policies by the end user. Science Computing is not responsible for performing system maintenance, patching, installation, or upgrades. End users are expected to install any and all security updates, provide for full disk encryption, and use system firewalls as described by University policy. Unmanaged systems are not eligible to connect to divisionally-provided NFS file shares. Systems purchased without consulting Science Computing fall into this category.
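For illustration, on an LTS Ubuntu unmanaged system the baseline maintenance described above might look like the following. This is a sketch of common practice, not a Science Computing procedure; consult University policy for the authoritative requirements.

```shell
# Illustrative commands only; not an official Science Computing procedure.

# Apply pending updates, then enable automatic security updates.
sudo apt update && sudo apt upgrade -y
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Enable the host firewall, permitting only needed services (ssh shown).
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw enable

# Verify that full disk encryption (LUKS) is in place on local storage.
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT | grep -i crypt
```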

Unmanaged systems identified by Information Security as possessing critical vulnerabilities may be disconnected from the campus network within 7 days after Science Computing staff notify the owner via email of the vulnerability, if the vulnerabilities have not been remedied. Unmanaged systems showing signs of active exploitation will be summarily disconnected from the campus network without prior notice to protect divisional network resources.

Science Computing will offer limited, general advice to users of Unmanaged systems. In general, Science Computing will not intervene on the user’s behalf for Unmanaged systems. 

  • Specialized Support:

Occasionally, novel research-computing equipment does not fit within the Science Computing support model (e.g., exceptionally old systems, out-of-date operating systems, systems that cannot be placed on the regular campus network). In these cases, a faculty member can negotiate for custom support. Requests for custom support should be submitted to the RITAC. Custom support falls outside the Science Computing support model and is recharged to the requesting faculty member at the recharge rate listed above. The RITAC negotiates with the requesting faculty member, on a case-by-case basis, to determine the recharge budget for the custom support.

Filesystem Snapshots and Backups:

Science Computing can provide short term data backups of research data on Science Computing-managed servers.  Administrative workstations and laptops are not backed up, and users are encouraged to keep copies of critical work data on a campus-provided shared platform, such as Google Drive.

Backups occur on an every-other-day interval and are retained for 30 days (one month). Recovery of data beyond this point cannot be guaranteed. Faculty and researchers are expected to curate their data and move it to an archival solution afterwards; Science Computing can provide guidance on archival solutions. In the case of exceptionally large data sets or custom backups, faculty or research groups will be expected to provide a backup target or server for their data.
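Taken together, the every-other-day cadence and 30-day retention window imply roughly 15 restore points are available at any given time. A quick sketch of that arithmetic (the function name and the assumption of an exact 2-day cadence are illustrative, not part of the service definition):

```python
from datetime import date, timedelta

def restore_points(today, cadence_days=2, retention_days=30):
    # Dates of backups still inside the retention window, assuming a
    # backup ran today and on every cadence interval before that.
    points = []
    d = today
    while (today - d).days < retention_days:
        points.append(d)
        d -= timedelta(days=cadence_days)
    return points

pts = restore_points(date(2021, 11, 29))
print(len(pts))  # 15 restore points in the 30-day window
```

In other words, a file deleted more than a month ago has aged out of every retained backup, which is why the policy directs long-lived data to an archival solution.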

Data Center Space Allocation:

Science Computing manages systems in divisionally-owned campus data center spaces on behalf of the faculty and division. Physical space on campus is limited. Access is provided on a first-come, first-served system, subject to space, technical, and facilities restrictions. Preference is given to those who do not already have a space allocation.  Faculty will not have direct access to systems hosted in divisionally-owned data centers.  Allocations are made at the sole discretion of the Science Computing Director.

If the faculty or PI have technical requirements not presently provided in a divisionally-owned, campus data center, they will be expected to fund any necessary upgrades to provide that capability.

Due to limited data center space on campus, systems residing in divisionally-owned data centers are expected to be retired and removed at the end of their functional life cycle (typically after 5 years or after manufacturer warranty expires). Science Computing will work with the system owner on removal, data migration, and potential replacement.  Remote colocation options are available.  Please contact Science Computing for consultation and more information.

Standard system requirements for faculty or PI’s requesting colocation include, but are not limited to:

  • Rack mountable in standard server cabinets
  • Dual power supplies
  • IPMI or remote management interface
  • 10Gb networking
  • Mirrored OS boot device