Audience: IT Support Staff
When used: The database backup failed for the table httpfPageResult. Because of data block corruption in the database, the application could not select from the table, so the daily data dump for it was also failing. The only recovery path is to collect the data from all the monitoring nodes and load it back into the database.
1. Get the list of serviceIds that should have data for the httpf service. Issue the following SQL statement against the production database:
select 'grep "^35|' || s.serviceid || '|" viewPoint/var/$i >> abcd' from servicelist s, subservice b where s.servicetype = 8 and s.serviceid = b.serviceid and b.expiredate > sysdate;
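The statement above emits one ready-to-paste grep command per active httpf service. The implied record layout (an assumption based on the "^35|" pattern) is that httpfPageResult rows in the node data files are pipe-delimited and start with record type 35 followed by the serviceid. A minimal sketch with a hypothetical serviceid 1234 and throwaway sample data:

```shell
# Sketch only: serviceid 1234 and the sample records are hypothetical.
rm -f /tmp/abcd
cat > /tmp/sample_data <<'EOF'
35|1234|20030501|200|0.42
35|9999|20030501|200|0.17
17|1234|20030501|some-other-record-type
EOF

# This is the shape of each command line the SQL statement generates;
# the leading ^35| anchor keeps other record types and serviceids out:
grep "^35|1234|" /tmp/sample_data >> /tmp/abcd

cat /tmp/abcd
```

Only the first sample row (type 35, serviceid 1234) ends up in the output file.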
2. Create a script to extract the data from each monitoring node:
#!/bin/sh -x
for i in data.<YYMMDD> data.<YYMMDD> data.<YYMMDD> data.<YYMMDD> data.<YYMMDD>
do
    # ***** paste the output from the previous SQL select statement here *****
done
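Filled in, the script might look like the sketch below. The serviceids (1234, 5678), the date suffixes, and the demo setup are all hypothetical; in the real run you paste the grep lines produced by the step-1 SQL statement into the loop body and list the actual data.<YYMMDD> files:

```shell
#!/bin/sh -x
# Sketch only: serviceids, dates, and the demo data are hypothetical.

# Demo setup so this sketch runs standalone (NOT part of the procedure):
mkdir -p viewPoint/var
printf '35|1234|demo-row\n35|5678|demo-row\n' > viewPoint/var/data.030501
cp viewPoint/var/data.030501 viewPoint/var/data.030502
rm -f abcd

for i in data.030501 data.030502
do
    # --- lines pasted from the step-1 SQL output ---
    grep "^35|1234|" viewPoint/var/$i >> abcd
    grep "^35|5678|" viewPoint/var/$i >> abcd
done
```

Each generated grep appends its matches to "abcd", so the loop accumulates the httpfPageResult rows for every active serviceid across every data file.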
3. Execute the script on each monitoring node from the ~vwpoint directory. The output file "abcd" will contain the complete httpfPageResult data set.
4. Compress the output file "abcd" and transfer it to "<your account>@binary4".
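The compress step might look like the sketch below. The use of gzip and scp is an assumption (the note does not name the tools); use whatever your site standardizes on, and remember to send the file to your own account, never to vwpoint:

```shell
# Sketch only: the demo file stands in for the real "abcd" extract.
printf '35|1234|demo\n' > abcd
gzip -f abcd                     # produces abcd.gz and removes abcd

# Then copy it to your OWN account on binary4 (never to vwpoint), e.g.:
#   scp abcd.gz <your account>@binary4:
ls abcd.gz
```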
NOTE: Do not transfer to the account vwpoint. It will mess things up.
5. On <your account>@binary4, create the directories ~<your home dir>/viewPoint, ~<your home dir>/viewPoint/bin, and ~<your home dir>/viewPoint/var.
6. Move the data extract file (abcd) to ~<your home dir>/viewPoint/var and rename it "data.<YYMMDD>".
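Steps 5 and 6 together might be sketched as below; 030501 stands in for the real <YYMMDD> of the extract, and the demo file stands in for the transferred "abcd":

```shell
# Sketch only: run on binary4 under your own account.
mkdir -p ~/viewPoint/bin ~/viewPoint/var
printf '35|1234|demo\n' > abcd           # demo stand-in for the extract
mv abcd ~/viewPoint/var/data.030501      # rename to data.<YYMMDD>
```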
7. Copy the program ~vwpoint/viewPoint/bin/sndMonRes to ~<your home dir>/viewPoint/bin.
8. Set the environment variable VIEWPOINT=~<your home dir>/viewPoint and export it.
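In a Bourne shell that would be (assuming, as the procedure implies, that sndMonRes locates its var/ directory through VIEWPOINT):

```shell
# Sketch: sndMonRes is assumed to read VIEWPOINT to find viewPoint/var.
VIEWPOINT=~/viewPoint
export VIEWPOINT
echo "$VIEWPOINT"
```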
9. Execute the program ~<your home dir>/viewPoint/bin/sndMonRes -i 126.96.36.199. This will transfer the data to the main production system.