" /> Status for Andrew DeFaria: October 2006 Archives


October 31, 2006

Cloning done

  • Implemented cloning procedure for DMD/CQ

How to clone a parent CQ record to a child

You would think that this would be clearly documented in the CQ manual but it isn't. The requirement is clear enough - "implement parent/child relationships for a CQ record... Oh and when a child is created could you copy everything from the parent and we'll change what's different in the child".

Implementing a parent/child relationship is pretty clear and documented - basically you create a reference field in the record type that points back to the same record type. CQ even has a parent/child control that handles manipulating the relationship, giving the user controls to link in existing records, delete a parent/child relationship, or add a new record as a child of this record. But there is nothing in there about copying data from the parent to the child. That you must do with hooks. But how do you code the hooks?

I found a method for doing this and implemented the following. The trick is to add pre- and post-action hooks to the New button of the parent/child control. This button is selected when the user wishes to add a new child record to this parent. The pre-action hook, created as a Record Script, sets a session-wide variable saying "Hey, I'm adding a new child record to this parent". This variable contains the ID of the parent. The following code accomplishes this:

# Pre-action hook for the New button: remember which parent we are
# adding a child to by stashing its ID in a session-wide variable
my $session = $entity->GetSession;

$session->NameValue ("ParentID", $entity->GetFieldValue ("id")->GetValue);

After creating this record script, add it as the pre-action hook for the New button. Don't forget to toggle the Enable for CQ Web option (I don't really understand why you would ever not toggle that).

For the post-action script you are basically saying "Hey I'm no longer adding a new child record to this parent" with the following code:

# Post-action hook for the New button: the child record has been added,
# so clear the session-wide variable
my $session = $entity->GetSession;

$session->NameValue ("ParentID", "");

What this does is effectively bound the window of time during which you are in this unique situation - at any other time the session-wide variable ParentID will be blank. Now the cloning can begin...

In the default value hooks for each field you want cloned place the following call:

CloneField ($fieldname);

This is written as a call to a Global Script since the code will always be the same and because you'll have to do this for each field that you wish to clone.
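
For example, the default value hook for a hypothetical Description field would end up looking something like this (the Description_DefaultValue stub and its $fieldname argument follow the usual generated hook skeleton; only the CloneField call is added by hand):

sub Description_DefaultValue {
  my ($fieldname) = @_;

  # Clone this field from the parent when we are adding a new child
  CloneField ($fieldname);
} # Description_DefaultValue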

Finally, create the following Global Script:

sub CloneField {
  my $fieldname = shift;
    
  # Check session wide global, ParentID, and if set retrieve
  # the parent record and clone this field
  my $session  = $entity->GetSession;
  my $parentID = $session->GetNameValue ("ParentID");

  if ($parentID ne "") {
    # If ParentID is not blank then we are adding a subtask.
    # Copy this field from the parent to this new child.

    # Get the parent record
    my $parent = $session->GetEntity ("ChangeRequest", $parentID);

    # Set the child field
    $entity->SetFieldValue (
      $fieldname,
      $parent->GetFieldValue ($fieldname)->GetValue
    );
  } # if
} # CloneField

This script checks whether the session global "ParentID" is non-blank, indicating that we are in the special mode of adding a new child to an existing parent. If so, it retrieves the parent record, reads the value of the named field from the parent and sets the corresponding field of the new child record to that value.

October 26, 2006

cclic_report/gpdb_putDesignsync bug

  • Created cclic_report.pl and cclic_report.sh and checked them into Clearcase. Need to find out how to release this code and then create the necessary cronjob
  • Fix bug in gpdb_putDesignsync. Turns out it didn't even try to link the Project to Designsync if the Designsync record existed before. Again, need to figure out how to release this
  • Created subtasks tab for DMD. Default functionality for parent/child relationships is already pretty complete. May need to populate new subtasks via a hook

October 25, 2006

gpdb_add_projects.pl v1.1

  • Checked in new gpdb_add_project.pl
  • Created cclic_report.pl and checked it into Clearcase

I've checked in a new version of gpdb_add_project.pl (v 1.1) - (/main/FORE/9). Changes include:

  1. Major changes cleaning up code
  2. Simplified some references
  3. Simplified updateProject
  4. Simplified addProject
  5. Removed debugging code
  6. Rewrote updateDesignSync to be more understandable
  7. Simplified updateGPDB
  8. Changed nslookup to redirect stderr. Need to improve error handling here where server name is no longer in DNS.
  9. Improved logging to specify the names of new users when they are added.
  10. Changed to handle both auto[_.]data and auto[_.]db cases in NIS
  11. Changed to use the GPDB Admin user (a00000000) to log into gpdb.
  12. Log messages improved to include site name

October 24, 2006

gpdb_add_project

  • Lots of re-writing of gpdb_add_project.pl to better handle error conditions
  • Investigated DB structures some more
  • Created cleardb.sql script to clear out the test db

October 19, 2006

GPDB login

  • Changed several gpdb modules to support a db parameter for logging into an alternate database
  • Changed gpdb-devel.pl to use new interface

October 18, 2006

The Oracle speaks...

  • Learned about sqlplus and how to speak to Oracle databases from Victor
  • Re-wrote the section about getting NIS data and managed to get gpdb_add_project to talk to Nice properly
  • Checked in working version of gpdb_add_project and the Rexec.pm module
  • Discovered that gpdb_add_project stumbles over some no longer existing machines in some DesignSync registries. Need to change this to send email
  • Updated my rc scripts to include support for Oracle
  • Working with Bill and Mike we determined that there is a test database for gpdb
  • Determined how gpdb.pm opens the database in an effort to teach it how to connect to a test database. It's currently an all or nothing thing. This needs to change.

October 17, 2006

PerlDB Tips

The Perl debugger is one of those valuable tools that, surprisingly, few Perl coders seem to know well. Here are some quick tips on using the Perl debugger. First, a few explanations about the commands I tend to use:

s
Single step. Step to the next statement stepping into any subroutines (where the source file is known and accessible).
n
Step over - if the line contains a call to a subroutine then this will step over that subroutine.
r
Return from subroutine - useful if, say, you accidentally stepped into a subroutine or you just want to return to the caller.
R
Rerun - start your Perl script again in the debugger with all the parms you started with.
q
quit
p <variable or expression>
Will print the contents of a variable or expression. Expressions can be any Perl expression, including calls to subroutines. You can, for example, do p 'There are ' . scalar @foo . ' lines in foo';
x <variable or expression>
Like p above, however p will simply print out HASH(0x...) for hash references whereas x will format them out. Also x will print out "undef" for things that are undefined yet p will print nothing for them.
l (ell)
List the next windowSize lines (see below). Use "l <n>", where <n> is a line number, to list that line.
v <n>
View lines around <n>
V <package>
List exported subroutines and variables for <package> (e.g. V MyModule lists all the stuff exported from MyModule).
f <filename>
File - switch to another file (e.g. f MyModule switches the debugger to viewing MyModule.pm).
c <n>
Continue to line <n>. If n is not specified then just continue until the next break point or the end of the script. Continue is like setting a temporary break point that disappears when you hit the line.
b <n> <condition>
Breakpoint - set a break point at line <n>, optionally with a condition (e.g. b <n> $name eq "Donna" will break at line <n> only if $name is "Donna", evaluated when the debugger gets to line <n>).

Also, at the Perl debugger prompt you can type in any Perl. So, for example, I often work out regexes that way. I'll be debugging a Perl script and step up to something like:

     10==> if (/(\d*).*\s+/) {
     11      print "match!\n";
     12      $x = $1;
     13    }

Then I'll type in stuff like:

     DB<10> if (/(\d*).*\s+/) { print "1 = $1\n"; } else { print "No
     match!\n"; }
     No match!
     DB<11>

Then I can use the command history (with set -o emacs at the shell before starting the Perl debugger, emacs key bindings work for me) to edit and re-enter that Perl if statement, changing the regex until it works correctly. This way I know I've got the right regex. I copy and paste the new, tested regex from the debugging session into my code, then use "R" to restart the debugger.

Or you can, say, call an arbitrary subroutine in your script:

       DB<2> b Rexec::ssh
       DB<3> p Rexec::ssh
Rexec::ssh(/view/cmdt_x0062320/vobs/cmtools/src/misc/GPDB/bin/../../../../lib/perl/Rexec.pm:60):
     60:         my $self = shift;
       DB<<4>>

The "p Rexec::ssh" says to print the results of the following expression. The expression is a function call in to the Rexec module for the subroutine ssh. Since we just set a break point there in the previous debug command we break at the start of that subroutine and can then debug it. Note you don't want to "c Rexec::ssh" because that would continue the actual execution of your script and only stop at Rexec::ssh if that routine was actually called. Viola, you just forcefully caused the Perl interpreter to branch to this routine!

Another thing I'll frequently do is set or change variables to see how the code would proceed if the variables were correct (or perhaps incorrect, to test error conditions). So let's say we forced execution of the subroutine Log as above:

42      sub Log {
43:==>    my $msg = shift;
44        print "$msg\n";

  DB<23> s
main::Log(EvilTwin.pl:45):       print "$msg\n";
  DB<24>$msg = "Now I set msg to something I want it to be"
  DB<25>s
Now I set msg to something I want it to be
main::Log(EvilTwin.pl:47):              return;
  DB<25>

There are all sorts of good reasons to examine (p $variable) and set ($variable = "new value") variables during debugging.

Finally put the following into ~/.perldb:

     parse_options ("windowSize=23");

This sets the window size to 23 so that "l" lists the next 23 lines.
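
Other debugger options can be set the same way. For example, a ~/.perldb along these lines (dumpDepth is one of the options documented in perldebug; the values here are just a personal preference) keeps x from dumping deeply nested structures forever:

     parse_options ("windowSize=23");
     parse_options ("dumpDepth=2");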

October 13, 2006

gpdb_add_project.pl using gpdb user and Nice

  • Attempted to integrate Rexec into gpdb_add_project.pl and have it talk to Nice
  • Looked into problem with Cygwin, Samba and ssh

Rexec, gpdb_add_project.pl and Nice

I've been making some slow but steady progress with gpdb_add_project.pl. I've:

  • Implemented an Rexec Perl module that allows better access to remote sites. It does this by attempting ssh, then rsh and finally telnet in an attempt to contact the remote site. It's object oriented and allows you to repeatedly execute remote commands without having to repeatedly log in. Finally it can take a different username than the person running the script (see the sketch after this list).
  • David then got me set up with a generic gpdb user for the Dallas and Nice sites.
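
To give a feel for the intended usage, here's a rough sketch (the constructor and method names are illustrative only and may not match the module's actual interface):

# Log in once, run several commands, then drop the connection.
# Note: new, exec and status are assumed names for illustration.
use Rexec;

my $remote = Rexec->new (
  host     => "remotehost",
  username => "gpdb",
);

for my $cmd ("uname -a", "cat /etc/auto_master") {
  my @output = $remote->exec ($cmd);

  print "$cmd returned status " . $remote->status . "\n";
  print @output;
} # for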

In attempting to use the new generic gpdb user I encountered a few problems. The biggest difference is that the gpdb user is tcsh (and, I think, csh) oriented whereas the Rexec module assumes a Bourne/ksh/bash orientation. This has caused a number of problems:

  1. When logging onto the system the prompt is different (csh style shells use "%")
  2. When logging onto the Nice site not only is the prompt different but it contains special characters. It uses embedded escape sequences that colorize the prompt. Rexec needs to find the prompt so it knows when it can send commands. Needless to say this is problematic for Rexec. For now I set the prompt for gpdb@Nice to simply "% ", which works.
  3. Some of the commands that gpdb_add_project.pl issues are decidedly Bourne shell oriented. For example, it uses 2>&1 to combine stdout and stderr. This syntax is not valid under csh style shells. Additionally, Rexec would wrap commands in an "echo start; <cmd>; echo errono=$?" in order to obtain the return status of the remotely executed command. The $? variable is not available in csh style shells. So I added a shellstyle parameter to Rexec to handle these differences, as sketched below (though that doesn't fix #2).
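
The idea behind the shellstyle parameter is roughly the following (a sketch only, with illustrative variable names, not the module's actual code):

# Wrap the remote command so its exit status can be parsed out of the
# output stream. csh style shells report the status in $status while
# Bourne style shells report it in $?.
my $wrapped;

if ($shellstyle eq "csh") {
  $wrapped = "$cmd; echo errorstatus=\$status";
} else {
  $wrapped = "$cmd; echo errorstatus=\$?";
} # if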

One way around all of these problems is to require generic service level accounts such as gpdb to run the default Bourne shell (/bin/sh).

Next, and forgive me since my NIS is a bit rusty, but gpdb_add_project.pl attempts to get certain NIS maps from remote sites that use NIS (it is also NIS+ aware/sensitive). In doing so it does an ls -1 /etc and looks for files such as auto_master. It then cats auto_master and looks for lines that have "+auto" or "data" in them. It then uses that as the map name for ypcat, as in ypcat -k auto_master.
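
Roughly, the logic looks something like this (a simplified sketch; $server stands in for the remote host and the real script's details differ):

# Look for an auto_master file on the remote host and, if present, use
# its "+auto"/"data" entries as NIS map names for ypcat
my @etc = `rsh $server -n ls -1 /etc`;

chomp @etc;

if (grep { /^auto_master$/ } @etc) {
  my @maps = grep { /\+auto|data/ } `rsh $server -n cat /etc/auto_master`;

  for (@maps) {
    my ($map) = /([\w.]+)/;

    print `rsh $server -n ypcat -k $map`;
  } # for
} # if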

Now @Nice (svrscity01.tif.ti.com) it finds:

% cat /etc/auto_master
# Master map for automounter
#
+auto_master
/xfn -xfn
/net -hosts      -intr,rw,grpid

So it then does ypcat -k auto_master which:

% ypcat -k auto_master
no such map in server's domain

The following does work though:

% ypcat -k auto.master
/clearcase auto.clearcase
/home_drp auto.home_drp -intr,ro
/apps_drp auto.apps_drp -intr,ro
/db_drp auto.db_drp -intr,ro
/user auto.user -intr,rw,grpid
/tool auto.tool -intr,rw,grpid,noquota,noatime
/home auto.home -intr,rw,grpid
/apps auto.tool -intr,rw,grpid,noquota,noatime
/xfn -xfn -noquota
/sim auto.sim
/net -hosts -intr,rw,grpid,noquota
/db auto.db
/u auto.tool -intr,rw,grpid,noquota

It appears to be trying to find the auto_data map, of which there is none, and then to look for "sync_custom" in there. As such I don't see how this ever worked at Nice.

Thoughts? Pointers?

Cygwin, Samba and ssh

Here's the story. I use Cygwin on my XP desktop. I like having a home directory on Windows that is the same home directory as on Unix/Linux machines. Often companies offer access to your Unix/Linux home directory via Samba. Also, companies often do not bother to set up a Samba server which participates in a domain, so the Samba server is configured as being in a workgroup.

Now for a long time I struggled with this. I would map //<samba server>/<home share> -> my H drive, then mount the H drive as /home and make sure my Cygwin /etc/passwd referred to my home directory of /home/$USER. All is great.

But when dealing with Samba servers that are configured into workgroups, innocuous activities in Cygwin would elicit permission denied messages - for example, touching a file in the home directory and indeed even vi'ing a file. Creating a file within Windows Explorer or using other Windows oriented tools would work just fine. Files created on the Unix/Linux side would also work fine but, when looked at from Cygwin on the PC, would have odd (read "nobody") ownerships and permissions.

Of course, since Cygwin is often not supported by the typical company's IT department, and because many people do not attempt to utilize Cygwin fully, requests for assistance and change often fell on deaf ears...

Eventually I figured out that my Windows SID in /etc/passwd is the SID of my domain user and, since the Samba server was not in the domain, my SID does not authenticate properly. Then I had a breakthrough when I realized that I was using SMBNTSEC as well as NTSEC in my Cygwin environment. I figured "Yeah, I want to use the same Windows security for SMB mounted drives too". This is where my problem lies, because the Samba server configured by the client does not participate in the Windows domain I've logged into.

Now I'm pretty sure that Samba could be configured properly into a Windows domain as Samba can be configured as a PDC or a BDC, but many clients don't bother to go that far. So why is Windows able to deal with this but not Cygwin?

I believe that this is because Samba takes a very basic approach to storing user identification information. Indeed, basic Samba just has an smbpasswd file, which is much like your typical Unix/Linux /etc/passwd file, and it is not designed to carry extra information about users and machine accounts as well as multiple groups, trust associations, etc. Even the Samba documentation talks about hooking Samba up to either LDAP or what they call a Trivial DataBase (TDB) in order to store such additional Windows-only information.

So I thought the simple solution was to remove SMBNTSEC from my Cygwin environment and all would be fine. And indeed it is! Well almost...

Along comes ssh... So I like to use ssh to log into various Unix/Linux systems as I work. And again I share my home directory between Windows and Unix/Linux. Finally, I like setting up passwordless public key ssh login as I'm not one of those who likes having to type in his password hundreds of times a day. But ssh is picky about the permissions of your ~/.ssh directory and ~/.ssh/id_<type> key files. When ssh'ing from Cygwin to a Unix/Linux box I am now receiving the following:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/home/x0062320/.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /home/x0062320/.ssh/id_rsa
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/home/x0062320/.ssh/id_dsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: /home/x0062320/.ssh/id_dsa
x0062320@stashu's password:

And, of course, I need to type in my password again! What I believe is happening is that because my home directory is SMB mounted and SMBNTSEC is off, Cygwin reports that files like ~/.ssh/id_rsa are 0644 even if I change them on Unix/Linux to 0600. So, for example:

<unix box>$ ls -l ~/.ssh/id_rsa
-rw-------  1 x0062320 generic 887 Aug 31 16:43 /home/x0062320/.ssh/id_rsa

While:

<cygwin>$ ls -l ~/.ssh/id_rsa
-rw-r--r-- 1 x0062320 Domain Users 887 Aug 31 16:43 /home/x0062320/.ssh/id_rsa

Is there any way to work around this problem (short of reconfiguring the Samba server)?

October 10, 2006

Improved gpdb_add_project

  • Checked in improved gpdb_add_project.pl script
  • Started investigating Dave Smith's Monthly Matrix

I've gotten Michael's copy of gpdb_add_project.pl, which he placed into the cmtools/src vob, working now. Here are some of the changes:

  • Resolved all problems so that "use strict" could be used
  • Fixed bugs with the tests for right-space-padded siteName values (e.g. $siteName eq "Dallas   ")
  • Changed code so that failure to contact a remote site simply reports a warning and moves on to the next site (previously it would die after the first failure).
  • Changed reading of gpdb_site_list.txt file so that it reads from the correct location.
  • Added Protocol, Username and Password columns to gpdb_site_list.txt (and reading of site file).
  • Changed all rsh calls to call an "Rexec" function that takes a hash of the site line in gpdb_site_list.txt. This is to allow an alternate username/password per site

Note that Rexec currently only supports the rsh protocol and the return code from this Rexec is the same as the return code for the rsh command itself, not the return code from the command that rsh is executing on the remote machine.

The thought with the Protocol column in the site file is to support additional protocols, notably ssh and telnet, in the future.

However, "rsh <machine> -l <user> command" doesn't prompt for password and you cannot supply password on the command line either1 ssh will prompt for a password. Telnet, of course, does not offer the ability to add a command. Finally specifying a password in a file is not very secure.

I've spent a little time coming up with a Perl module (strangely named Rexec) to implement remote command execution. Currently it is an object oriented approach that uses telnet and Expect to open a channel to the remote machine and execute commands returning the remotely executed command's status and output. I plan on extending it to first try ssh then rlogin and finally telnet. For the first protocol, if passwordless access is set up then no password is required. If it can't connect that way then it can fall back to rlogin or telnet and use Expect to drive the login and command execution (or optionally fail if no password is given). In general I think this would be a useful module to have for all.

Another benefit would be that the usage would be such that you establish a connection, execute potentially hundreds of commands, then destroy the connection - which would be a lot faster than say hundreds of ssh/rsh's which each need to login, do the command and logout.

Another optimization I see for gpdb_add_project.pl would be to have the remote machine cut down the amount of data that needs to be sent over the network connection. For example, gpdb_add_project.pl uses a remote connection to essentially niscat the auto_data automount map, returning hundreds of lines (6395 lines or 622559 bytes). Then it loops through that array looking for lines that say "sync_custom" (actually it first sorts them - for no particular reason!). I propose that it instead do something like "niscat auto_data.org_dir | grep sync_custom", returning only 2 lines or 126 bytes, and remove the needless sort(s), as sketched below.
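
In code terms the change would look something like this (a sketch only, using the same backtick/rsh style the script already uses; $server stands in for the remote host):

# Current approach: pull the entire map across the network, sort it for
# no particular reason, then filter it locally
my @lines = sort `rsh $server -n niscat auto_data.org_dir`;
my @sync  = grep { /sync_custom/ } @lines;

# Proposed approach: filter on the remote side so only the couple of
# matching lines come back over the network (and skip the sort)
my @sync_proposed = `rsh $server -n 'niscat auto_data.org_dir | grep sync_custom'`;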

Architecturally I think that what gpdb_add_project.pl does is interface between the outside world (in this case DesignSync) and GPDB. GPDB has a nice Perl module to allow consistent programmatic access to the output - GPDB (Thanks Michael). What would be nice is a Perl module that gathers all DesignSync info (which is what gpdb_add_project.pl does) allowing a consistent programmatic access to DesignSync info (and down the road - Clearcase). Then gpdb_add_project.pl could be greatly simplified to simply talk to these two architected interfaces.

October 9, 2006

Rexec

  • Finished recoding gpdb_add_project to call a central Rexec function so that remote execution can use a different username. However right now all it does is rsh, which can use another username but rsh needs to have remote passwordless login set up in order to work
  • Created Rexec.pm Perl object that has the ability to use Expect and telnet to log in remotely to a machine and execute commands, returning output and the status of the command executed remotely. Need to integrate this with gpdb_add_project...

October 5, 2006

Rexec

  • Fixed bug in gpdb_add_project.pl
  • Met about MultiSite Hardening Daemon (MSHD)

Rexec

The gpdb_add_project.pl script utilizes rsh to log into remote sites to execute commands and otherwise gather data from other sites (actually it uses rsh locally too but that's another blog entry!). It does this with code like this:

unless ($site_registry_handle->open("rsh $siteHash{$siteName} -n cat $syncReg |"))  {
   print ("Can not open rsh $siteHash{$siteName} -n cat $syncReg |.\n");
   exit;
}

The problem is that the above code will not work as intended. What is returned from the open call here is merely whether the pipe (i.e. the rsh process) could be started - not whether the cat command worked! Indeed the open of a pipe returns little - it's the close of the pipe that's going to return the status of the rsh. The status of the cat in this case is never returned, and in this case it was failing. There is no really reliable method for getting the status of the cat command except perhaps to process the remote session using Expect or Perl/Expect.
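
To illustrate the point, the rsh's status only becomes available at the close of the pipe (and even then it is the rsh's status, not the remote cat's, since rsh does not propagate the remote command's exit code). A minimal sketch:

# Success from open only means the rsh process could be started
open my $pipe, "rsh $server -n cat $file |"
  or die "Unable to start rsh: $!\n";

my @lines = <$pipe>;

# It is the close that reports the rsh command's exit status
unless (close $pipe) {
  warn "rsh exited with status " . ($? >> 8) . "\n";
} # unless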

Since we expect the cat to work and to return lines and because there is a specific format for the first line, I implemented the following:

sub GetDSRegFile {
  my $file	= shift;
  my $server	= shift;

  my @lines;

  if ($server) {
    @lines = `rsh $server -n cat $file`;

    return undef if $?;
  } else {
    @lines = `cat $file`;
  } # if

  if ($lines [0] and $lines [0] !~ /\#\# SYNC_VERSION 1\.0/) {
    return undef;
  } else {
    return @lines;
  } # if
} # GetDSRegFile

This solves the problem for this particular case. However this script uses this invalid technique for many other "remote calls" and has potential for error.

October 3, 2006

Redirecting on ErrorDocument

  • Looked into redirecting when ErrorDocument is called. Doesn't look like it'll work

I thought it might be possible to direct users to the proper Clearquest schema and Context ID by trapping on ErrorDocument. Normally when a document is not found (i.e. error 404) Apache will display the ErrorDocument associated with error 404. I thought that that document could be a script that looked at the referer and then looked it up in a table to see if the user had entered a "group". So, IOW, if the user entered http://server/CSSD I could look up CSSD, see that it referred to a Clearquest schema, and redirect them. If the group lookup failed then I could simply display a 404 error.

But alas, referer (passed in via the environment variable HTTP_REFERER) is only set if the user is coming from an existing page. Here, most often, users are typing in the URL or have the URL bookmarked. In either case referer is not set, and even if it were it would be of no help anyway because it would not be http://server/CSSD (since that page does not really exist).