Tuesday, May 26, 2015

D3 9.x on CentOS 6 - Needed libraries

glibc-devel.i686
ncurses-devel.i686
libgcc.i686
gcc
pam-devel.i686
libuuid.i686
openssl098e.i686
hal-libs.i686
libidn.i686
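
These can all be installed in one pass with yum (a sketch; the package names are exactly those listed above):

# yum install glibc-devel.i686 ncurses-devel.i686 libgcc.i686 gcc \
      pam-devel.i686 libuuid.i686 openssl098e.i686 hal-libs.i686 libidn.i686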

Friday, May 22, 2015

What Version am I running...... Here's some answers!


What version of UniVerse am I running?
To get the version of UniVerse on your server (for all recent releases), go to TCL in any user account and edit the RELLEVEL item in the VOC file:
>ED VOC RELLEVEL
5 lines long.
----: P
0001: X
0002: 10.1.11
0003: PICK
0004: PICK.FORMAT
0005: 10.1.11
Bottom at line 5.
----: EX
>
In this example, the release is 10.1.11.

What is my UniVerse Serial Number?
To find the UniVerse Serial Number, enter the CONFIG command at TCL. The Serial Number is the same as the License Number listed on the first line:
>CONFIG
Configuration data for license number 123456789:
User limit = 10
In this example, the UniVerse Serial Number is 123456789.

What version of UniData am I running?
To get the version of UniData on your server (for all recent releases), execute "VERSION" from the TCL command prompt:

>VERSION
Unidata RDBMS......................3.3.2     Yes
Recoverable File System............1.1       No
Transaction Processing.............1.1       No
UniData OFS/NFA....................1.3       No
UniServer..........................1.3       Yes
UniDesktop.........................1.3       No
USAM Monitor Profile...............1.3       No
USAM Print.........................1.3       No
USAM Batch.........................1.3       No
USAM Journaling....................1.3       No
33265
The actual version of UniData is 3.3.2.65, taken from the last line (33265).

OPERATING SYSTEMS

What version of HP-UX am I running? 
To get the version of UNIX on your server, go into the Korn shell and enter the uname command with the -a option:
>uname -a
HP-UX  bmd350  B.10.20  D 9000/831  2011043966  64-user license
This example shows HP-UX running the 10.20 version.

What version of IBM AIX am I running? 
For those of you running the IBM AIX operating system, you can find the version of the operating system you are running (as well as the Rocket Software UniData release) by going to the UNIX shell, changing to the UniData bin directory as shown below, and displaying the port.note file with the cat command:

# pwd
/usr/ud/bin
# ls -l port.note
-r--r--r--   1 root     dw              173 Feb 28 2003  port.note
# cat port.note
Platform         : AIX 4.3.3
Operating System : AIX engine 3 4 000159494C00
Porting Date     : Feb 28 2003
UniData Release  : 6.0.3 60_030221_4161
Ported by        : srcman
# 

What version of Windows NT am I running?
To get the version of Windows on your server for all releases of NT 4.0, execute the STATUS command at TCL within the UniVerse environment:
>STATUS
You are logged onto XYZ running Windows NT 4.0 (Build 1381 Service Pack 6)

Monday, February 23, 2015

5 Reasons CEOs Prefer MV Dashboard Over Spreadsheets

5 Reasons CEOs Prefer Executive MV Dashboard Over Spreadsheets:

  1. Universal Platform: Visualize and combine data from your MultiValue Database.
  2. Automated Reporting: Automatically update key metrics to gain insights in real time. No more waiting for reports.
  3. Mobile Access: Spreadsheets are painful on your phone. With a dashboard, you can see your key metrics on any device.
  4. Improve Performance: Real-time dashboards increase transparency and accountability, which help motivate your workforce.
  5. Drill-Down Analysis: With the click of a mouse, executives can drill down into the exact data they want.


What Can an Executive MV Dashboard Do for Your Business?

If you’re like most CEOs, you’re working from multiple reports, trying to understand what’s happening in your business—and what to do about it.  While your brain may be sharp, it shouldn’t be the glue that holds this information together.  With MV Dashboard, you can see all the metrics that matter to you in a single, personalized CEO dashboard.

Because your data is often tucked away in many different corners of your MultiValue system, gathering and analyzing this information can be an inconvenient and time-consuming venture. You need a centralized, web-based portal that you can use daily to view critical stats about your business data and make real-time decisions. MultiValue Dashboard allows you to select and present your critical business data with intuitive, web-based graphical interfaces and widgets, giving you the tools you need to make rapid business decisions based on real-time data.

Monday, February 2, 2015

D3 Runtime-Errors File

Error Logging

This topic describes how errors are recorded when using FlashBASIC. Errors encountered during runtime are logged to the DM,RUNTIME-ERRORS file and can be displayed with the TCL command LIST-RUNTIME-ERRORS.
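For example, from TCL (a sketch; the report layout varies by release, so no output is shown):

LIST-RUNTIME-ERRORS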
When compiling FlashBASIC programs with the o option, the compiler automatically logs all compilation errors if a data section called $log is present in the user's BASIC program file. The log is updated only when errors occur. Each log entry's ID is the same as the ID of the item being compiled.
The first attribute of the entry contains the time and date the error occurred, as well as the phase of compilation in which it occurred. Other attributes can contain additional information, such as UNIX error messages.
  • Errors logged as phase 0 errors are problems detected by the standard FlashBASIC compiler.
  • Errors logged as phase 1 and higher are FlashBASIC compilation errors.
  • Errors logged at higher phases can indicate an installation problem or the lack of a resource, such as swap space. In these cases, attributes 2 and onward provide more exact error reporting.
For UNIX: Not supported
For Windows: FlashBASIC runtime errors can be logged to the Windows event log. This feature is set from the FlashBASIC tab of the D3 Device Manager (see the D3 System Administration Guide for more information).
A Windows event log entry has the following format:
Runtime error <err> @ <progname>:<lineno>
where:
<err>
FlashBASIC error message item-ID.
<progname>
Item-ID of the FlashBASIC module.
<lineno>
Source line number.
Additional information can also be logged in the Data section of the Windows event log (accessible through the Event Detail dialog box in the Windows event viewer). For example:
Runtime error B12 @ myprog:15
B12 is a "file has not been opened" error, occurring in module myprog at line 15.
Example(s)
To enable logging for a file called bp, type:
create-file bp,$log 7
Compiling using the o option now logs errors into the bp,$log file. These can be displayed by typing any of these commands:
ct bp,$log
list-item bp,$log
sort-item bp,$log
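
For reference, the compile step itself would look something like this (a sketch, assuming a source item named myprog in the bp file):

compile bp myprog (o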

Wednesday, November 19, 2014

Where is the System ID located in D3?


I always find myself trying to remember where in the world the system ID is located on a D3 system. I don't always need to know it, but when I have to migrate or update/upgrade a server with a new system ID, it's nice to know where it lives. Currently, in D3 systems up through 9.x, the system ID is stored in the CONFIG item of the DM,MESSAGES, file. To edit it, log in to the DM account and get to TCL. Type "ED MESSAGES CONFIG" and you will see your current system ID. Make the necessary changes and be sure to "FI" (file) the item.
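
Here's a minimal session sketch (the attribute layout of the CONFIG item varies by release, and the IDs are placeholders; in ED, L lists lines, R replaces text, and FI files the item):

ED MESSAGES CONFIG
top
.L22
.R/OLDID/NEWID
.FI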

That's it for now.  Stay tuned for more regular tidbits regarding other MultiValue systems!

Tuesday, September 30, 2014

How to Resize a File in uniVerse



MEMO file used in this example

    Resizing is a function which can be performed by any user with access to an account. Since it's not something a normal user would stumble over without some searching, this shouldn't be a problem. However, resizing should be understood and performed with no one on the account, or at least with certainty that no one is accessing the file being resized. In newer releases of uniVerse, a resize can't be done if the file is open by any process, whether background or user initiated. So, here are the steps to follow, assuming the previous issues are known and controlled.

  1. ANALYZE.FILE MEMO

    (from TCL)
  2. This will return data which is significant in determining the new size. Here's a sample of what may come back:

    >ANALYZE.FILE MEMO
    File name                               = MEMO
    File type                               = 18
    Number of groups in file (modulo)       = 1009
    Separation                              = 1
    Number of records                       = 10125
    Number of physical bytes                = 46554112
    Number of data bytes                    = 37423452
    
    Average number of records per group     = 10.0347
    Average number of bytes per group       = 37089.6452
    Minimum number of records in a group    = 5
    Maximum number of records in a group    = 20
    
    Average number of bytes per record      = 3696.1434
    Minimum number of bytes in a record     = 20
    Maximum number of bytes in a record     = 427496
    
    Average number of fields per record     = 76.8433
    Minimum number of fields per record     = 1
    Maximum number of fields per record     = 7733
    
    Groups  25%    50%    75%   100%   125%   150%   175%   200% full
              0      0      0      0      0      0      0   1009 
    
    
    The above example gives us all the information we need to resize the file and make it usable and fast for now. The only missing information, which the client will have to provide and should know, is how much growth they expect over a specified period of time. That time period is measured by how soon they expect to resize the file the next time.

  3. Analyze the data returned by the ANALYZE.FILE command

  4. The data of interest is:

    a. File type: 18

      This is fine for this file. For files which are wholly numeric, other file types can be used; however, type 18 has proven to be very fast and stable. [Note: A discussion of using Dynamic Files (type 30) is not included in this document.]

    b. Modulo: 1009

      This will be used for reference to see how badly sized the file is.

    c. Separation: 1

      The separation should be based on the hardware/OS use of block sizes at the disk level. On HP this is usually 2048 bytes per block; IBM uses 1024-byte blocks. So, depending on your hardware, you need to adjust accordingly. The unit of separation is 512 bytes, so a separation of 2 equals a uniVerse block size of 1024, 4 = 2048, 8 = 4096, etc. You can use any separation you wish; however, it is recommended that you stay within the sizes suggested by your hardware constraints.
      [Note: If you are on a Microsoft platform, use a separation of 4, which has proven to be very stable, as a baseline for resizing unless your record size mandates a larger separation.]

    d. Number of data bytes: 37423452

      This number, when expected growth is included, is the most important. It is used to calculate the new file size.

    e. Average number of bytes per record: 3696.1434

      This number will be used to determine if a separation larger than the current one is needed.
      [Note: On Microsoft platforms, this can still be modified if desired, but be sure to use a power of 2, i.e. 2, 4, 8, 16, or 32. Values larger than 16 become difficult to maintain and are very inefficient where disk usage is a concern.]

    f. Minimum number of bytes in a record: 20

      This number, in conjunction with the previous one, will be used to determine the optimum separation.

    g. Maximum number of bytes in a record: 427496

      With the previous two, this is used to determine the separation. Whether this number is used depends on how close it is to the average. Usually it is so much higher than the average that it's considered too disproportionate to count. If the average and this number are relatively close, then it carries more weight.

    Now for the analysis:


    Note: Too much analysis may become counterproductive. In many cases it's more important just to get the resize done rather than worrying about what separation to use. If you feel this is your case, just use a separation of 4 when the average item is smaller and 8 when it is larger on average, along with a file type of 18, which seems to be very efficient, unless a case can be made for dynamic files (type 30). If you choose to take this route, you may not need to pay much attention to the next section; however, you accept all responsibility.

      A. Find the best SEPARATION. Use the Average from e. above as the rule, but consider how close it is to the minimum and maximum in f. and g. In this case, it's closer to the minimum than the maximum, so we can surmise that the maximum is an exception rather than the rule. The rule of thumb for determining the separation is to fit somewhere between 3 and 10 records per group. At 3700 bytes per record, we can determine that the separation might best be 16, which makes each of the primary buffers in the file (blocksize) 8192 bytes. This is not unusual for the MEMO file. If you follow this process on the FISCAL file, you will find different results.

      B. Calculate the correct MODULO based on the SEPARATION from step A. above. For this, divide the Number of data bytes (37423452) by the group size in bytes, i.e. the SEPARATION times 512 (16 * 512 = 8192). For this example, that equals 4568.29.

      C. Add the growth percentage the client provides. This example assumes 20 percent growth between now and the next resize in, say, six months. So, multiply 4568.29 by 1.2. No, wait. I usually allow for the oversized records and the overflow area which the large records will require, so I typically add about another 10% to the file size myself. So, take 4568.29 and multiply it by 1.30, which gives a modulo of 5938.777, or 5939.
      NOTE: If you are going to add a large amount of data to a file that is currently sized appropriately, you will need to calculate the percentage increase of the new data in relation to the existing data, then calculate the modulo accordingly. For example: if you are adding 5,000 records to the file in our example, which holds 10,125 records, multiply 4568.29 by 1.6, which increases the size of the file by half, plus a 10% growth allowance.
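
      For a quick sanity check, the arithmetic can be run at the UNIX shell with bc (a sketch; the figures are the ones from this example, and bc truncates rather than rounds):

      $ echo "scale=2; 37423452 / (16 * 512)" | bc
      4568.29
      $ echo "scale=2; 4568.29 * 1.30" | bc
      5938.77

      Rounding up gives the 5939 used above.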

      D. Calculate the PRIME number. At TCL, type PRIME 5939. You will get the following response:

      >PRIME 5939
      Next lower prime number: 5939.
      Next higher prime number: 5939.
      

      Now that isn't going to happen very often (having the number you calculated be a prime number). Okay, back to work... We now have the sizes to use when we perform the RESIZE command.

  5. RESIZE the file.
  6. But first, you must determine if there's enough disk space in the filesystem or partition where the current file resides. When you resize, uniVerse creates a temporary file whose name starts with resize; when I ran it for this example, the name was resize9e6151 (the name on your system will have different numbers or letters after the word resize). That file becomes the new file when the process is completed, so you must have room for both the new file and the old file. Here are the commands you would use if you have enough room:

    >RESIZE MEMO * 5939 16

    where the '*' tells the process to keep the current parameter. In this case, that's the file type. In our example, that's a type 18, which is fine.
    If you don't have enough disk space on the filesystem or partition, but do on another filesystem or partition, here's the command you would use:

    >RESIZE MEMO * 5939 16 USING /u5
    
    
    You must have checked and know that /u5 has plenty of space for the new file.
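
    One quick way to check is df at the UNIX shell (a sketch; the filesystem name and the numbers shown are purely illustrative). At modulo 5939 and separation 16, the new file will need roughly 5939 * 8192 bytes, or about 48 MB, on top of the existing file:

    $ df -k /u5
    Filesystem       1024-blocks     Used Available Capacity Mounted on
    /dev/vg01/lvol5      8388608  2097152   6291456      25% /u5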

    NOTE: The USING switch on Microsoft platforms would have D:\directory, rather than /u5.

    For example:
          RESIZE MEMO * 5939 16 USING D:\WINNT\TEMP\


    The process will create the new file, either in the same directory as MEMO or in the directory specified after the USING keyword. It then copies items from the old file to the new resizeNNN file; as each item is copied, a flag is set in the old file so the process knows where to resume. The groups are walked in sequential order, setting these resize flags, until all the records are copied. Then the old file is deleted and the new file is moved into place, renamed from resize9e6151 (in this sample) to MEMO.

    NOTE: If you run the resize as a superuser or administrator, you must check the permissions afterwards to verify that the new file has read/write permissions set appropriately for the users that will need access. This is especially true on Hewlett-Packard (HP-UX) UNIX systems.

    Depending on how large the file is, what the new file type is, if that's being changed, or what the separation is, this process could take a very long time. That's one reason I recommend keeping files to a manageable size. It might become necessary to distribute files to make that possible.

    Should you need to interrupt the resize for any reason, including those out of your control, the following command must be run on the file before it can be used:

    filepeek MEMO

    which will give some information about the file, then prompt with
    Addr:

    where you will type RCL to reset the resize bits. It will prompt you to enter Y to continue or N to stop. Enter Y.

    You must be superuser to execute filepeek. Do not do anything else in this program, because it alters data at the disk level and can't be undone. Fixes resulting from misuse of this program will be billable at the current emergency coverage rates. Please don't abort the resize unless absolutely necessary; you will probably lose an item or two if you do. If the process is aborted by a power outage, you can expect multiple corruptions in the file and perhaps the loss of many records.

  7. Go home and eat dinner.
  8. You're finished.

Sunday, September 7, 2014

ABS does not verify



"ABS FRAME MISMATCHES - EXPECTED FFFF FOUND FFFF."

This message occurs when ABS in memory does not match the DATA ABS, due to ABS corruption. This corruption could be in the DATA ABS, or in memory.

To correct this problem, clear the DATA ABS and SEL-RESTORE the DATA ABS from the Pick DATA diskettes or from a pseudo floppy. Reload the VIRTUAL ABS (ABS Diskettes #1 & #2 or Pseudo Floppy) from the 'A' option.

WARNING: This process will require reloading any ABS Patches and custom ABS.

PROCEDURE:

1. From the DM account at TCL type: clear-file data abs 

2. SET-FLOPPY (AH or SET-DEVICE device#  
(depending on the drive you will be loading your Pick Data files from).

3. If Pick was loaded from diskette, then insert the original Pick Data #1 diskette.

4. Type SEL-RESTORE ABS (I and, when prompted, choose F)ull. Account name on tape = DM, Filename = ABS.

5. Insert the DATA #2 disk when prompted.

6. From TCL in the DM account, type VERIFY-SYSTEM.

7. If the system still does not verify, shut down and reboot the system (with the System #1 disk, or the Monitor patch if you have a NATIVE or PRO system). At the option menu, choose A)BS and reload ABS using the ABS #1 & #2 diskettes or the ABS file listed in the display.

8. The system should verify now.

9. Reload any ABS patches and custom ABS that were previously loaded.