Table of Contents
List of Traps for auto-discovery
Compress the tar and file directory
Moving/Removing Files
DRAFT
These instructions are not concise, so be aware that there may be errors or missing commands.
These instructions are used to apply a PPM update, which is a standard process for upgrading applications on a Linux OS.
The com.zip patches that ManageEngine supply are likely to be specific to the OpManager application and may not be valid for other applications.
Depending on your access levels and permissions, copy the PPM and com.zip files to the /home directory of your user or root.
Using WinSCP works well if TFTP or SCP is not set up on the laptop.
https://www.manageengine.com/network-monitoring/help/read-me.html
Confirm that the change control has been accepted and it is OK to go ahead. This process will stop the Probe and Central servers, and monitoring will cease.
To copy the installation directory as a backup prior to upgrading, use the following commands. Stop the services first.
sudo cp -r OpManagerProbe/ 20190507_OpManagerProbeBackup
mv SOURCE DESTINATION
mv /home/com.zip /opt/ManageEngine/OpManagerProbe/lib/fix
Start the Probe services once the backup is done to allow Auto-Upgrades to work.
Delete /com and any subdirectories if these exist.
rm -Rf com
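The backup step above can be sketched end to end. This is a minimal demonstration of the pattern (a dated copy of the install directory) run inside a throwaway temp directory, so it is safe to try anywhere; on a real server the source would be /opt/ManageEngine/OpManagerProbe and the copy would be made with sudo after stopping the services.

```shell
# Dated-backup pattern, demonstrated in a sandbox.
# Production equivalent (services stopped first):
#   cd /opt/ManageEngine && sudo cp -r OpManagerProbe/ 20190507_OpManagerProbeBackup
sandbox=$(mktemp -d)
cd "$sandbox"

# Stand-in for the real install directory
mkdir -p OpManagerProbe/lib/fix
echo "placeholder" > OpManagerProbe/lib/fix/readme.txt

# Name the backup with today's date, matching the 20190507_... convention above
backup="$(date +%Y%m%d)_OpManagerProbeBackup"
cp -r OpManagerProbe/ "$backup"
ls -d "$backup"
```

The date prefix makes repeated pre-upgrade backups sort chronologically and avoids overwriting an earlier backup.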
NOTE: Once updated to 123194, there is a fix for Auto-Upgrade. The new process is to stop the OpManager service on all Probes and the Central server, then apply the service pack on Central.
Once Central is upgraded, start all Probes so that upgrades are automatically pushed to them.
/opt/ManageEngine/OpManagerCentral/bin
sudo su
/etc/init.d/OpManagerServer stop
ps -ef |grep post
killall postgres
cd /opt/ManageEngine/OpManagerCentral/bin
./UpdateManager.sh -c
./StartOpManagerServer.sh OR /etc/init.d/OpManagerServer start
tail -f ../logs/wrapper.log
If any issue occurs, please send us the logs directory for analysis.
We use SNMP to fetch the GPS location. The OIDs for the device types are listed in the coordinates.xml file located under <<product-home>>/conf/OpManager.
<Coordinates deviceTypeName="MIMOMAX-BRU-T">
<latitude>.1.3.6.1.4.1.31730.8.1.35</latitude>
<longitude>.1.3.6.1.4.1.31730.8.1.36</longitude>
</Coordinates>
<Coordinates deviceTypeName="MIMOMAX-RRU-T">
<latitude>.1.3.6.1.4.1.31730.7.1.35</latitude>
<longitude>.1.3.6.1.4.1.31730.7.1.36</longitude>
</Coordinates>
<Coordinates deviceTypeName="NDLT">
<latitude>.1.3.6.1.4.1.31730.9.1.35</latitude>
<longitude>.1.3.6.1.4.1.31730.9.1.36</longitude>
</Coordinates>
<Coordinates deviceTypeName="MIMOMAX-RRU-P">
<latitude>.1.3.6.1.4.1.31730.10.1.32</latitude>
<longitude>.1.3.6.1.4.1.31730.10.1.33</longitude>
</Coordinates>
<Coordinates deviceTypeName="MIMOMAX-BRU-P">
<latitude>.1.3.6.1.4.1.31730.11.1.32</latitude>
<longitude>.1.3.6.1.4.1.31730.11.1.33</longitude>
</Coordinates>
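As a quick sanity check after editing, the deviceType-to-OID mapping can be pulled out of coordinates.xml with standard text tools. The sketch below copies one entry from above into a sample file under /tmp; in production you would point the same commands at the real coordinates.xml under <<product-home>>/conf/OpManager.

```shell
# Extract the device type and latitude/longitude OIDs from a
# coordinates.xml fragment (sample data taken from the entries above).
cat > /tmp/coordinates_sample.xml <<'EOF'
<Coordinates deviceTypeName="MIMOMAX-BRU-T">
<latitude>.1.3.6.1.4.1.31730.8.1.35</latitude>
<longitude>.1.3.6.1.4.1.31730.8.1.36</longitude>
</Coordinates>
EOF

# grep the attribute, then strip the element markup with sed
grep -o 'deviceTypeName="[^"]*"' /tmp/coordinates_sample.xml
sed -n 's:.*<latitude>\(.*\)</latitude>.*:lat \1:p' /tmp/coordinates_sample.xml
sed -n 's:.*<longitude>\(.*\)</longitude>.*:lon \1:p' /tmp/coordinates_sample.xml
```

This makes it easy to eyeball that every device type carries the OID pair you expect before restarting the service.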
After applying the PPM, follow the steps below:
If you face any issues, enable the debug prints, reproduce the issue, and send us the logs for analysis.
cd /opt/ManageEngine/OpManagerProbe/conf/OpManager
nano discovery.properties
#$Id$
#When ping is true, discovery is performed using ICMP; when set as false, discovery is performed using nmap.
PING=false
#This timeout value is in seconds. This is the timeout value for discovery. If discovery performs more than specified t$
TIMEOUT=300
#The below is to specify the nmap options or parameters which are required for discovery. Any options like -R or --$
NMAPSTATPARAMS=-sU
#The below is to specify required options to find the os of the devices. By default it is executed as 'nmap --host_$
NMAPTYPEPARAMS=--host_timeout 15000 -sS
#Snmp time out value in seconds.
SNMPTIMEOUT=60
#Enable/Disable the dns lookup during discovery, set TRUE/FALSE.
DNS_LOOKUP=TRUE
#The below is to specify the bulk monitors addition timeout in seconds, default timeout is 300 sec.
BULKMONITORS_DUMP_TIMEOUT=300
#Option to set the snmp displayname as unique in discovery by using the MATCH_DISPLAYNAME property
MATCH_DISPLAYNAME=false
#The below is to specify required options to find the services (specified in nmap-services file) running in the de$
SERVICEPARAMS=-F
#Whether the Display Name of Interface should be updated if any changes are there.
UPDATEINTFDISPLAYNAME=false
UNIQUE_ELEMENT_IPADDRESS=false
UNIQUE_SYSNAME=false
ADD_DEVICE_IP_ENDING_255=false
SNMPRETRIES=2
#Enable multiple EngineID requests
RETRY_ENGINEID_REQUEST=true
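After editing discovery.properties it is worth verifying the values actually saved, since a typo here silently changes discovery behaviour. The sketch below runs the check against a small sample file under /tmp (an assumption for demonstration); on a Probe the file is /opt/ManageEngine/OpManagerProbe/conf/OpManager/discovery.properties.

```shell
# Verify key discovery.properties values after editing.
# Sample file for demonstration only; point $f at the real file in production.
f=/tmp/discovery.properties
cat > "$f" <<'EOF'
PING=false
TIMEOUT=300
SNMPTIMEOUT=60
DNS_LOOKUP=TRUE
SNMPRETRIES=2
EOF

# Report any property whose exact line is missing from the file
for want in PING=false TIMEOUT=300 SNMPTIMEOUT=60; do
    grep -qx "$want" "$f" && echo "ok: $want" || echo "MISMATCH: $want"
done
```

grep -qx matches the whole line, so a stray value such as PING=true is flagged rather than silently accepted.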
nano threads.conf
#$Id$
# This configuration file is to be able to control the number of threads
# in WebNMS via the various schedulers. The syntax of the file is
# SchedulerName number of Threads.
# By default WebNMS has 6 schedulers: one main one for general tasks,
# one for status polls, one for data polls, one for the discovery module,
# one for the configuration module and one for the Management Server module.
# They are identified by the names "main", "statuspoll", "datapoll",
# "discovery", "config" and "MS" respectively.
# Depending on the requirements you can increase the number of threads for a
# particular task.
main 6
statuspoll 1
datapoll 12
discovery 10
deepdisc 10
config 3
MS 1
Net_Disc 4
provisioning 4
DataCollection 10
WMI_EXEC 12
REALTIME 6
VI_WMI_EXEC 10
ProcessMonitoring_EXEC 6
HardwareMonitoring_EXEC 4
SAVE_POLL 60
HardwareMonitoringDisc_EXEC 2
CoordinatesLocation 4
#Unlimited Edition Configurations
INTF_POLL 40
BULK_DATA_SAVER 40
NP_EMAIL 5
SNMP_SPOLL_SAVE 4
Storage_Poll 12
Storage_StatusPoll 5
Trap_Discovery 2
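Because the file format is just "SchedulerName count", a thread-count change can be scripted with sed instead of hand-editing in nano. This is a sketch on a sample copy under /tmp (an assumption for demonstration); on a server, edit the real threads.conf in the conf directory used above, with the service stopped.

```shell
# Bump a scheduler's thread count in a threads.conf-style file with sed.
f=/tmp/threads.conf
printf 'main 6\nstatuspoll 1\ndatapoll 12\n' > "$f"

# Replace the whole "statuspoll" line, whatever its current count
sed -i 's/^statuspoll .*/statuspoll 4/' "$f"
cat "$f"
```

Anchoring the pattern with ^ keeps the substitution from touching other schedulers whose names merely contain the same substring.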
As part of the patch there is a conf file that needs to contain the comma-separated list of traps we send. The services will need to be stopped before applying the change, or restarted afterwards.
Go to the following directory and open the configuration file in an editor:
cd /opt/ManageEngine/OpManagerProbe/conf/OpManager
nano serverparameters.conf
Look for the following MMX Build items:
#Autodiscovery of devices through GPS coordinates
discoverDeviceThroughTrap true
#Determines if the device is managed or unmanaged after autodiscovery
trapDiscManagedState true
#MiMOMax Build Changes
isMiMOMaxBuild true
isOfflineMapEnabled true
offlineMapURL http://172.21.117.119:8082
Add the following OIDs to get OpManager to trigger auto-discovery on these specific traps:
mimomaxTrapDiscoveryOIDs .1.3.6.1.4.1.31730.7.0.4,.1.3.6.1.4.1.31730.8.0.4,.1.3.6.1.4.1.31730.9.0.4,.1.3.6.1.4.1.31730.10.0.4,.1.3.6.1.4.1.31730.11.0.4
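A malformed entry in this comma-separated list (a stray space, a missing dot, a letter) will stop trap-based discovery from matching. A quick way to sanity-check the list before pasting it into serverparameters.conf is to split it on commas and confirm each entry is a dotted numeric OID:

```shell
# Validate a comma-separated trap OID list: every entry should start
# with a dot and contain only digits and dots.
oids=".1.3.6.1.4.1.31730.7.0.4,.1.3.6.1.4.1.31730.8.0.4,.1.3.6.1.4.1.31730.9.0.4,.1.3.6.1.4.1.31730.10.0.4,.1.3.6.1.4.1.31730.11.0.4"

echo "$oids" | tr ',' '\n' | while read -r oid; do
    case "$oid" in
        .*[!0-9.]*) echo "BAD: $oid" ;;   # starts with dot but has an illegal char
        .*)         echo "ok:  $oid" ;;   # dotted numeric OID
        *)          echo "BAD: $oid" ;;   # does not start with a dot
    esac
done
```

Any line printed as BAD should be fixed before restarting the services.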
Restart OpManager services:
/etc/init.d/OpManagerServer restart
To discover the data from a radio we need to add Software and Serial numbers by adding the following lines into the XML. This can be used for any device; we simply add the corresponding OID into the software and serial number fields.
Please edit the HardwareInfo.xml file present under <<OpManager-Home>>/conf/OpManager and add or update the respective OIDs with instances.
Restart the OpManager service, then rediscover/add the device; the serial number and software version will be populated.
Regards,
Raamesh Keerthi.N.J
<?xml version="1.0" encoding="utf-8"?>
<HardwareInformation>
<Hardware deviceTypeName="MIMOMAX-BRU-T">
<SoftwareVersion>.1.3.6.1.4.1.31730.8.1.14.0</SoftwareVersion>
<SerialNumber>.1.3.6.1.4.1.31730.8.1.20.0</SerialNumber>
</Hardware>
<Hardware deviceTypeName="MIMOMAX-RRU-T">
<SoftwareVersion>.1.3.6.1.4.1.31730.7.1.14.0</SoftwareVersion>
<SerialNumber>.1.3.6.1.4.1.31730.7.1.20.0</SerialNumber>
</Hardware>
<Hardware deviceTypeName="MIMOMAX-RRU-P">
<SoftwareVersion>.1.3.6.1.4.1.31730.10.1.11.0</SoftwareVersion>
<SerialNumber>.1.3.6.1.4.1.31730.10.1.17.0</SerialNumber>
</Hardware>
<Hardware deviceTypeName="MIMOMAX-BRU-P">
<SoftwareVersion>.1.3.6.1.4.1.31730.11.1.11.0</SoftwareVersion>
<SerialNumber>.1.3.6.1.4.1.31730.11.1.17.0</SerialNumber>
</Hardware>
</HardwareInformation>
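A common failure after hand-editing HardwareInfo.xml is a Hardware entry missing one of its two OID elements. A rough structural check is to count the tags and confirm they agree; the sketch below runs against a one-entry sample under /tmp (an assumption for demonstration), but the same counts can be taken on the real file under <<OpManager-Home>>/conf/OpManager.

```shell
# Rough structural check: every <Hardware> entry should carry one
# SoftwareVersion and one SerialNumber element.
f=/tmp/hardwareinfo_sample.xml
cat > "$f" <<'EOF'
<HardwareInformation>
<Hardware deviceTypeName="MIMOMAX-BRU-T">
<SoftwareVersion>.1.3.6.1.4.1.31730.8.1.14.0</SoftwareVersion>
<SerialNumber>.1.3.6.1.4.1.31730.8.1.20.0</SerialNumber>
</Hardware>
</HardwareInformation>
EOF

hw=$(grep -c '<Hardware ' "$f")
sv=$(grep -c '<SoftwareVersion>' "$f")
sn=$(grep -c '<SerialNumber>' "$f")
echo "Hardware=$hw SoftwareVersion=$sv SerialNumber=$sn"
[ "$hw" -eq "$sv" ] && [ "$hw" -eq "$sn" ] && echo "counts match"
```

If the counts disagree, find the entry with the missing element before restarting OpManager.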
Please follow the steps below to enable the Offline Map & MiMOMax features:
isMiMOMaxBuild true
isOfflineMapEnabled true
offlineMapURL http://172.21.197.112:8080/
Note: for offlineMapURL, replace http://172.21.197.112:8080/ with the IP of your dedicated offline map server.
/etc/init.d/OpManagerServer start
Do a:
tail -f ../logs/wrapper.log
OUTPUT MAY LOOK SOMETHING LIKE THIS
INFO | jvm 1 | 2018/03/22 09:10:21 | MafService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:22 | StatusPropagationService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:22 | DiscoveryService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:22 | NCMSSHDService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:22 | ServerStartupNotify [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:23 | SysLogMonitoringService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:28 | NetFlowService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:28 | OpUtilsService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:28 | DataManagement [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:28 | LeaService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:34 | DService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:34 | FWASSHDService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:51 | WebService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:10:51 |
INFO | jvm 1 | 2018/03/22 09:10:51 | Server started in :: [64708 ms]
INFO | jvm 1 | 2018/03/22 09:10:51 |
INFO | jvm 1 | 2018/03/22 09:10:51 | Connect to: [ https://localhost:443 ]
Once the web server line above appears, startup has completed.
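Rather than watching tail by hand, the wait for that line can be scripted. This sketch polls for "Server started in" with a timeout; it is demonstrated against a stand-in log file under /tmp (an assumption), whereas on a real Probe or Central server LOG would point at ../logs/wrapper.log.

```shell
# Poll wrapper.log until the "Server started in" line appears.
# Demo uses a stand-in log file that already contains the line.
LOG=/tmp/wrapper_demo.log
echo 'INFO | jvm 1 | 2018/03/22 09:10:51 | Server started in :: [64708 ms]' > "$LOG"

tries=0
until grep -q 'Server started in' "$LOG"; do
    tries=$((tries + 1))
    [ "$tries" -ge 60 ] && { echo "timed out waiting for startup"; break; }
    sleep 5
done
grep 'Server started in' "$LOG"
```

With tries capped at 60 and a 5-second sleep, the loop gives up after roughly five minutes instead of hanging forever on a failed start.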
EXAMPLE OUTPUT MAY LOOK LIKE THIS
root@pobfanprobe1:/opt/ManageEngine/OpManagerProbe/bin# tail -f ../logs/wrapper.log
INFO | jvm 1 | 2018/03/22 09:13:13 | DiscoveryService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:13 | ServerStartupNotify [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:14 | SysLogMonitoringService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:14 | NCMSSHDService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:16 | NetFlowService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:16 | OpUtilsService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:19 | DService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:19 | DataManagement [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:19 | LeaService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:19 | FWASSHDService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:29 | WebService [ STARTED ]
INFO | jvm 1 | 2018/03/22 09:13:30 |
INFO | jvm 1 | 2018/03/22 09:13:30 | Server started in :: [53239 ms]
INFO | jvm 1 | 2018/03/22 09:13:30 |
INFO | jvm 1 | 2018/03/22 09:13:30 | Connect to: [ https://localhost:443 ]
If there are any patches to apply, you can use the following process.
Upload the com.zip file to the Probe server, as it is only required for that device.
Stop the Probe services.
Stop the OpManager services and kill the postgres processes:
sudo su
/etc/init.d/OpManagerServer stop
Check that the postgres processes have stopped, and wait for them to do so.
ps -ef |grep post
Kill the processes if they don't stop:
killall postgres
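The "wait for the processes to exit, then kill them if they don't" step above can be scripted. This is a generic sketch of the pattern, demonstrated on a dummy sleep process rather than postgres so it is safe to run anywhere; on a server you would check with ps -ef | grep post and fall back to killall postgres as shown above.

```shell
# Wait briefly for a process to exit on its own, then kill it.
# Demo: a background sleep stands in for a lingering postgres process.
sleep 300 &
pid=$!

waited=0
while kill -0 "$pid" 2>/dev/null; do
    if [ "$waited" -ge 4 ]; then
        echo "still running after ${waited}s, killing it"
        kill "$pid"             # production equivalent: killall postgres
        break
    fi
    sleep 2
    waited=$((waited + 2))
done
wait "$pid" 2>/dev/null
kill -0 "$pid" 2>/dev/null || echo "process stopped"
```

Giving the process a grace period before killing it matters for postgres, which flushes state on a clean shutdown; killall is the last resort, as in the manual steps.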
Move the com.zip file to the installation directory.
If there is an old copy in the fix directory it will need to be removed. Confirm deletion.
rm /opt/ManageEngine/OpManagerProbe/lib/fix/com.zip
rm: remove regular file `com.zip'? y
To move the file to the installation directory, use the following commands.
mv SOURCE DESTINATION
mv /home/com.zip /opt/ManageEngine/OpManagerProbe/lib/fix
Delete /com and any subdirectories if these exist.
rm -Rf com
Unzip com.zip in /opt/ManageEngine/OpManagerProbe/lib/fix so that the subdirectories are placed in the correct locations.
unzip com.zip
ManageEngine often ask for the logs to be refreshed or backed up as part of diagnostics. This can be done by stopping the server and then moving the old logs directory aside. OpManager will create a new logs folder when the application starts.
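The refresh described above boils down to moving the logs directory aside under a dated name. Here is the pattern run in a throwaway sandbox standing in for /opt/ManageEngine/OpManagerProbe (an assumption for demonstration); on a server the mv runs as root with the services stopped, and OpManager recreates logs/ on the next start.

```shell
# Move the logs directory aside with a date stamp; the application
# recreates logs/ when it next starts.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/logs"
echo "old entries" > "$sandbox/logs/wrapper.log"

stamp=$(date +%Y%m%d)
mv "$sandbox/logs" "$sandbox/logs_${stamp}_backup"
ls "$sandbox"
```

The old logs stay available for ManageEngine under the dated name while the application starts with a clean directory.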
Stop the OpManager services and kill the postgres processes:
sudo su
/etc/init.d/OpManagerServer stop
Check that the postgres processes have stopped, and wait for them to do so.
ps -ef |grep post
Kill the processes if they don't stop:
killall postgres
Navigate to the Probe directory:
cd /opt/ManageEngine/OpManagerProbe/
To move the logs directory, use the following commands.
mv SOURCE DESTINATION
mv /opt/ManageEngine/OpManagerProbe/
If the logs need to be compressed and sent to ManageEngine for analysis, use WinSCP to transfer them and the next commands to zip the logs.
Stopping the OpManager service is preferred, but not essential.
sudo su
/etc/init.d/OpManagerServer stop
Check that the postgres processes have stopped, and wait for them to do so.
ps -ef |grep post
Kill the processes if they don't stop:
killall postgres
cd /opt/ManageEngine/OpManagerProbe/
zip -r FILENAME.zip DIRECTORY/*
zip -r testfile.zip logs/*
sudo tar -cvf FILENAME.tar /opt/ManageEngine/OpManagerProbe/logs/*
sudo gzip FILENAME.tar
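The tar-then-gzip pair above can also be done in one step with tar's z flag, which produces the same .tar.gz. This sketch runs on a sandbox logs directory so it is safe to try; in production the source would be /opt/ManageEngine/OpManagerProbe/logs and the command would run with sudo.

```shell
# Create a compressed archive of a logs directory in one step:
# tar -czf is equivalent to tar -cvf followed by gzip.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/logs"
echo "log line" > "$sandbox/logs/wrapper.log"

cd "$sandbox"
tar -czf logs_bundle.tar.gz logs/

# List the archive contents to confirm what was captured
tar -tzf logs_bundle.tar.gz
```

Listing with tar -tzf before transferring the file via WinSCP confirms the bundle actually contains the logs.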
/etc/init.d/OpManagerServer restart
The logs can be viewed in a web browser:
https://10.210.1.11/logs/stdout_0.txt
https://10.210.1.11/logs/opm/discoveryLogs_0.txt
https://10.210.1.11/logs/opm/nmserr_0.txt
https://10.210.1.11/logs/opm/opmanager_serverOut_0.txt
or log in to the Linux box and view the files under /opt/ManageEngine/OpManagerProbe/logs