Overview

The Ampere Pod consists of 3 Ampere HR330A servers with the following specifications per server:


Ampere HR330A

  CPU:        1x 32-core 3.3 GHz Ampere ARMv8 64-bit processor
  RAM:        8x 16GB DDR4 DIMM
  Storage:    1x 480GB SSD
  Networking: 2x 25GbE SFP+ (connected)
              1x 1GbE BASE-T (not connected)
              1x IPMI / lights-out management

Usage & Users

Blueprint(s) / Description(s): Connected Vehicle Blueprint

Primary Contact:
Secondary Contact: Robert Qiu (robertqiu@tencent.com)

Server Access

For the time being, when you request access to the Akraino pods at UNH-IOL, we will ask you to send us your public SSH key so that we can add it to the root user on those machines.

If you would like your own user account, we can set that up, but it is much quicker and easier to stick with the root user.
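As a sketch of the access flow: generate a key pair if you do not already have one, send us the public half, and then connect as root to a server's public address (ampere1's address from the table below is used here as an example).

```shell
# Generate an SSH key pair if you do not already have one
# (the public key is written to ~/.ssh/id_ed25519.pub)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# After emailing us the *.pub file and once it has been added,
# connect as root to ampere1's public address:
ssh root@10.11.6.11
```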

IPMI Access

Once you have successfully connected to the VPN, you will find a file named ipmi_info.txt in /opt on each machine. This file contains the username and password for the IPMI interfaces.

...

If you have any issues with any of the IPMI interfaces, please email akraino-lab@iol.unh.edu and we will assist you in any way we can.
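For example, with the credentials from ipmi_info.txt (shown as placeholders here), the standard ipmitool client can query a server's IPMI interface over the management network:

```shell
# Query ampere1's IPMI interface (address from the table below).
# Replace ADMIN/PASSWORD with the credentials from /opt/ipmi_info.txt.
ipmitool -I lanplus -H 10.11.5.11 -U ADMIN -P PASSWORD chassis status

# The same credentials also work for power control, e.g. a hard reset:
ipmitool -I lanplus -H 10.11.5.11 -U ADMIN -P PASSWORD chassis power cycle
```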

Networking

Network             IP Network                    VLAN ID  Description
IPMI / Management   10.11.5.0/24 (GW: 10.11.5.1)  3005     Connections to all IPMI interfaces on each server and the switch management interface
Public              10.11.6.0/24 (GW: 10.11.6.1)  3006     Public network (able to reach the Internet), available to each server

Servers and Switches

Server Name  IPMI Address  IPMI VLAN ID  Public Network Address(es)  Public VLAN ID  Switch Port(s)                                                 OS Installed
ampere1      10.11.5.11    3005          10.11.6.11, 10.11.6.12      3006            Cisco TOR: left 25GbE port 41, right 25GbE port 42             CentOS 7.6
ampere2      10.11.5.12    3005          10.11.6.13, 10.11.6.14      3006            Cisco TOR: left 25GbE port 43, right 25GbE port 44             CentOS 7.6
ampere3      10.11.5.13    3005          10.11.6.15, 10.11.6.16      3006            Cisco TOR: left 25GbE port 45; right 25GbE port not connected  CentOS 7.6


Usage

This POD is used for the Connected Vehicle Blueprint.

TarsFramework is deployed on Ampere Server 1.

TarsNode is deployed on Ampere Server 2 and Ampere Server 3.


To make CI/CD work, we set up the Jenkins master on Ampere Server 1 and Jenkins slaves on Ampere Servers 2 and 3.

Beyond that, we created a folder in the CI/CD Gerrit repo (path: ci/ci-management/jjb/connected-vehicle) and wrote a YAML job definition there. Jenkins drives the TARS framework through that YAML file.
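Assuming the job definitions follow the usual Jenkins Job Builder layout used elsewhere in ci-management, they can be checked locally before pushing to Gerrit; this is a sketch of that workflow, not a prescribed step.

```shell
# Install Jenkins Job Builder and render the connected-vehicle job
# definitions to XML locally, which catches YAML/template errors
# before the change is pushed to Gerrit for review.
pip install jenkins-job-builder
jenkins-jobs test ci-management/jjb/connected-vehicle/
```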

Finally, we upload the CI/CD logs to the Akraino community to show that the Connected Vehicle Blueprint passes the R2 release review.