Software :: SHA1-Hashing Differs When Writing To A File?
Mar 9, 2011
It is about the program sha1sum, which creates SHA-1 hashes. As you probably know, SHA-1 hashes are 20 bytes long. So when I just type:
Code:
sha1sum myfile
it produces an output of
Code:
(some 20-byte hash) myfile
just as it should. Now I want to store the 20-byte hash in another file, so I use this command:
Code:
sha1sum myfile | awk "{print $1}" >> myhash
Unfortunately I'm not familiar with awk, but this should cut off the end of the sha1sum output, which is the file name, leaving only the hash. The problem: the newly created file myhash is 41 bytes long, and printing it out I can see that it is not the original hash (I wrote a little program to print it bytewise).
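A hedged note on the likely cause: sha1sum prints the digest as 40 hexadecimal characters, not 20 raw bytes, so 40 hex digits plus a trailing newline is exactly the 41 bytes observed; the file probably holds the correct hash in hex form. Also, because the awk program is in double quotes, the shell expands $1 before awk ever sees it; single quotes avoid that:
Code:
# single-quote the awk program so the shell cannot expand $1
sha1sum myfile | awk '{print $1}' >> myhash   # 40 hex chars + newline = 41 bytes

# if the raw 20-byte digest is really wanted, decode the hex (xxd ships with vim)
sha1sum myfile | awk '{printf "%s", $1}' | xxd -r -p >> myhash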
I am building an Active Directory and using BIND9 as my DNS. To allow secure dynamic updates from the domain, I am enabling GSS-TSIG as detailed here and here. Unfortunately, some of the commands and configurations used there seem to be deprecated, at least in the newer versions I'm using. My issue is one of keytab encryption. I generated a keytab using ktpass.exe on the Windows Server 2008 domain controller. I have tried DES/MD5, AES128/SHA1 and AES256/SHA1; each has been rejected by ktutil on the Kerberos server (FreeBSD). Each time, it outputs the following error: ktutil: AES256/SHA1*: encryption type AES256/SHA1* not supported (*respective to the encryption type used).
I cannot find a list of encryption types that ktutil will accept. The FreeBSD Handbook details a means of producing a keytab file, but I'm not sure how to configure the domain controller to use that keytab.
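One hedged pointer: FreeBSD's base ktutil is Heimdal's, and Heimdal spells encryption types like aes256-cts-hmac-sha1-96 rather than ktpass's AES256-SHA1, which may be all the error means. A sketch, with the principal, realm and keytab path as placeholders:
Code:
# show what an existing keytab contains
ktutil -k /etc/krb5.keytab list

# add a key using an enctype name Heimdal recognizes
ktutil -k /etc/krb5.keytab add -p DNS/ns1.example.com@EXAMPLE.COM \
    -V 1 -e aes256-cts-hmac-sha1-96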
I am running Intrepid Ibex 64-bit. I recently ripped a CD with EAC on one of my Windows boxes, and it produced a cue sheet and one single WAV file for the entire CD. My question is this: is it possible to take the WAV file and burn an image that acts exactly like the original CD? In other words, take the single WAV file, split it back into its individual tracks using the cue sheet, and then read the resulting disc from a program like Sound Juicer, so it can write all the metadata and convert it with my own preferences. I know this is a very strange predicament, but it's one that would save me a world of hassle.
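A hedged sketch of one route (shntool and wodim assumed installed; file names are made up). Strictly speaking an audio CD is not an ISO-9660 image, but the cue sheet is enough to split the WAV and burn the tracks back as an audio disc that rippers will treat like the original:
Code:
# split the single WAV at the cue sheet's track boundaries
shnsplit -f album.cue -o wav -t '%n-%t' album.wav

# burn the pieces back as an audio CD
wodim dev=/dev/sr0 -v -audio -pad [0-9]*.wav
Alternatively, cdemu can mount the cue/WAV pair as a virtual drive, which skips the burn entirely, though Sound Juicer support may vary.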
I am trying to write into a single log file from 3 different applications in C. In one of the applications I fork out 5 instances. I would like to know the best way to open and close the log file these applications write to. Should I open and close it at the start and end of each application, or is it OK performance-wise if I open and close it inside the log function that is called for every write?
I recently upgraded my file/media server to Fedora 11. After doing so, I can no longer copy large files to the server. The files begin to transfer, but typically after about 1 GB has transferred, the transfer stalls and ultimately fails with the message:
"Error writing to file: Input/output error"
I've run out of ideas as to what could cause this problem. I have tried the following:
1. Different NFS versions: NFSv3 and NFSv4.
2. Copying the files to different physical drives on the server.
3. Copying the files from different physical drives on the client.
4. Different rsize and wsize block sizes when mounting the NFS share.
5. Copying the files via a different protocol, SSH in this case. The file transfers are always successful when I use SSH.
Regardless of what I do, the result is the same. The file transfers always fail after approximately 1 GB.
Some other notes.
1. Both the client and the server are running Fedora 11 kernel 2.6.29.5-191.fc11.x86_64
I am out of ideas. Has anyone else experienced something similar?
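For anyone debugging the same thing, a hedged sketch of a next step (server name, export and mount point are made up): pin the NFS version, transport and block sizes explicitly, then watch the client's RPC counters and kernel log during a copy:
Code:
mount -t nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 server:/export /mnt/test
nfsstat -c      # do retransmissions climb while the copy stalls?
dmesg | tail    # look for "server not responding" or I/O errors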
So this makes a CSV file with one column. I want to run it again, but rather than outputting to a new CSV I want to add the result to this one as the next column. For this example there will be 100 rows per column.
The first run makes the file:
Code:
[grassGIS code] > /home/gary/AVE_monte_carlo/rstats_AVE.csv
then I add ',' after each value, and the next run should open that file:
Code:
[grassGIS code] open file /home/gary/AVE_monte_carlo/rstats_AVE.csv
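A hedged sketch of the column-append itself (the second run's output file name is made up): write the new column to a temporary file and let paste glue the two side by side with the comma:
Code:
[grassGIS code] > /tmp/run2.csv
paste -d, /home/gary/AVE_monte_carlo/rstats_AVE.csv /tmp/run2.csv > /tmp/merged.csv
mv /tmp/merged.csv /home/gary/AVE_monte_carlo/rstats_AVE.csv
With 100 rows per column, paste pairs line 1 with line 1 and so on, so the runs must produce the same row count.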
Is it possible to write a ksh script in the spec file? The target is: after I perform rpm -i my_rpm.rpm, a ksh script from the spec file does some installation and configuration, for example running another script and editing some files.
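For what it's worth, rpm scriptlets can name their interpreter with -p, so a %post section can run under ksh, assuming ksh exists on the target system. A hedged sketch (the script path is hypothetical):
Code:
%post -p /bin/ksh
# runs under ksh right after 'rpm -i'
/usr/local/bin/configure_app.ksh
print "configuration done"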
Is it possible to forbid more than one user from opening the same file in rw mode? In Windows, when you open a file that another user is using, there's a warning and you have to open it in read-only mode.
I installed Ubuntu 10.04 Desktop Edition on 3 PCs (there is no server-client architecture). I installed Samba (and smbfs).
put the strings:
Code:
[name]
comment = ...
path = /...
guest ok = yes
read only = no
create mask = 0777
directory mask = 0777
Computers that access that directory run (on boot, with root privileges):
Code:
mount -t smbfs -o username="user",password="pass" //192.168.0.12/name /mnt/cartelladimontaggio
But if two users access the same file, both are allowed to write to it! So changes made by one are lost when the other saves.
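A hedged note: Samba can only enforce locks that clients actually request. Windows applications ask for share modes, which is where the familiar read-only warning comes from; a Linux smbfs mount behaves like an ordinary POSIX filesystem, so nothing is requested. Two things that may help, sketched under those assumptions:
Code:
# in smb.conf, make smbd honor and check byte-range locks aggressively
[name]
   locking = yes
   strict locking = yes   # check locks on every read/write (costs performance)
On the clients, applications that never lock can still collide, so serializing access explicitly, e.g. wrapping the editor in flock(1), may be the only reliable fix.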
I have custom software that writes to a sensitive large file when the user does something. I would like to make backup copies of the file that gets written to, but if I gzip the file at the same time someone is changing something, the backup will be corrupt: data is missing when it is backed up while being written to.
a) Is there a way to detect whether a file is currently being accessed/written to? That way, if it is currently being accessed, I can make the script wait until it's done and only then back it up.
b) Instead of backing up the large file while it could be written to, would it be better to make a copy of the file first, then gzip the copy? This idea comes from the fact that gzipping the original takes 5-10 seconds, whereas making a copy takes only 1-2 seconds. The less time, the less chance of corruption.
c) Is there any way to freeze a program or a file to stop it from being written to for an amount of time?
With a, b, and c together, the best solution to my problem would be a script that first detects whether the file is being accessed. If not, it would freeze the file/program and make a quick copy. Once the copy is created, it would unfreeze the original file/program and then gzip the copy.
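Parts (a) and (b) can be combined in plain shell: fuser exits 0 while any process has the file open, and copying before compressing keeps the risky window short. A hedged sketch (the path is made up, and the lock-out in (c) is not attempted, since a writer that opens the file between the check and the cp can still slip in):
Code:
#!/bin/sh
FILE=/path/to/large.file            # hypothetical path
while fuser "$FILE" >/dev/null 2>&1; do
    sleep 1                         # (a) wait until nobody has it open
done
cp "$FILE" "$FILE.snap"             # (b) fast copy first...
gzip -9 "$FILE.snap"                # ...then compress the copy at leisure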
I have 3 C files (one of them forks out 5 instances), all writing to one log file. To avoid the confusion of opening and closing in each application or instance, and the risk of never closing a file, I decided to open and close the log file inside the log function on each write.
So what I currently do per write is fopen, flock, fwrite, unlock, fclose. All the log messages from all the files get written fine and there are no errors, but I see a performance hit. The applications talk to each other using shared memory (SHM). When I set a timer and count messages, I get, say, X messages, and each time I remove or add a log call the count changes. With a 1- or 5-second timer the difference is small, a few hundred, but over a longer period every added log call drops the count by about 1000 messages. So I want to know an efficient way of implementing the custom log across the applications.
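One common alternative, hedged: let each process open the log once in append mode and keep the descriptor, since on Linux O_APPEND positions and writes atomically, so short records from separate processes land intact without a per-message open/lock/close cycle (in C that is one fopen(path, "a") per process; the flock is only needed if a record spans several writes). A quick shell demonstration of the append behavior:
Code:
# two concurrent writers appending through O_APPEND
( i=0; while [ $i -lt 1000 ]; do echo "app1 $i"; i=$((i+1)); done >> shared.log ) &
( i=0; while [ $i -lt 1000 ]; do echo "app2 $i"; i=$((i+1)); done >> shared.log ) &
wait
wc -l shared.log    # expect 2000 intact lines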
I have a very bad attempt at hashing the components of a TCP session to assign/locate the session in a hash table bucket. I am pretty sure it has a very high collision rate, and when there are a very large number of TCP sessions my application has to search a long linked list to find the session within its bucket.
All the hashing functions I have found take a single string input, whereas I need to feed in several integers and hash them into a single result. My guess is that almost any real hashing function will produce better results than what I am currently doing.
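A hedged observation: a string-input hash covers this case once the fields are serialized, so in C the usual move is to pack the source/destination addresses and ports into one buffer and run something like FNV-1a or Jenkins over it. The idea, illustrated in shell with made-up field values (cksum's CRC merely stands in for a real hash function):
Code:
key="192.168.0.5:51512>10.0.0.9:80"     # serialized 4-tuple
hash=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
echo "bucket $(( hash % 4096 ))"        # reduce to a table index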
I am capturing the same traffic on both client and server, using Microsoft Network Monitor on the client and tcpdump on the Linux server.
And when comparing the content of one specific frame captured on the two machines, I see that it differs!
Precisely:
I have SIP Communicator on the client and opensips on the server. The client sends a REGISTER request to the server. This request contains a "Contact:" header, which should hold the machine's IP address.
When captured on the client, this header contains the local private IP address of the machine running SIP Communicator (which is 192.168.***). But when captured on the server, it contains the outer address of my provider's router.
I have 2 possible explanations:
1) Some unknown agent acting between my client and server
2) opensips itself is acting at such a low level that the result of its frame manipulation is visible to tcpdump.
I am interested in whether the second option is possible. opensips is a proxy, so is a proxy able to manipulate frames directly on the medium?
If so, is it possible to run tcpdump at a lower level, so that it sees packets before opensips processes them?
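A hedged note on option 2: tcpdump captures at the link layer, below any userspace proxy, so on the server it shows the packet as it arrived on the wire, before opensips touches it. That points to the rewrite happening in transit, e.g. a NAT router or SIP ALG, or to the client stack learning its public address. One way to eyeball the payload exactly as received (interface name assumed):
Code:
# full-size capture, ASCII dump of the SIP payload on the standard port
tcpdump -i eth0 -s 0 -A udp port 5060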
So it seems the sorting algorithm of dpkg --get-selections differs from that of the 'sort' command when it encounters "-" (hyphen). How can I sort the original file (a.txt) so that it produces exactly the same output file, b.txt?
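The usual cause: sort obeys the locale's collation rules (which often skip punctuation), while dpkg compares bytes. Forcing the C locale makes sort compare bytes too:
Code:
LC_ALL=C sort a.txt > b.txt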
I am trying to create a floppy boot disk, as my computer doesn't support booting from CD. I have downloaded ntrawrite and placed it in a folder with the SBM file, and followed these instructions [URL] which I found here.
When I type in the command I get the response "ntawrite is not recognised as an internal or external command".
Is the problem that I'm not opening cmd in the right directory? I don't know how to change this.
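Two hedged things to check: the error echoes back "ntawrite", not "ntrawrite", so first rule out a typo in the command; and yes, cmd only finds programs in the current directory or on PATH, so change into the folder that holds the .exe (the path below is made up):
Code:
REM in the Windows command prompt
cd /d C:\Downloads\sbm
ntrawrite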
I am trying to read a file character-wise and write the same characters to another file. In this process, I am unable to read and write whitespace successfully to the new file. The script reads the whitespace, but on writing the whitespace is lost. The relevant section of the code is given below. Please advise how I can read and retain the whitespace while writing to the new file.
Code:
if [ -s f_test.txt ] && [ -f f_test.txt ]; then
    echo "File Exists !!"
    while read -n1 char; do
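The whitespace disappears because read word-splits on IFS; clearing IFS just for the read keeps it, and -r stops backslash mangling. One subtlety: with -n1 a newline comes back as an empty $char, so it has to be re-emitted explicitly. A hedged sketch (f_new.txt is a made-up output name):
Code:
if [ -s f_test.txt ] && [ -f f_test.txt ]; then
    echo "File Exists !!"
    while IFS= read -r -n1 char; do
        if [ -z "$char" ]; then
            printf '\n' >> f_new.txt   # read -n1 yields an empty char at end of line
        else
            printf '%s' "$char" >> f_new.txt
        fi
    done < f_test.txt
fi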
My program needs to monitor a folder to know which file under the folder is being opened/created for writing. I add the folder to the watch list using inotify_add_watch; when a file (say 'AA') is created, I get the event through the read API call. But the inotify_event only has the file name 'AA' and an event mask; these parameters can't tell me how 'AA' was created/opened. So I have to scan the /proc folder to find out how 'AA' was created/opened. I don't think this is an efficient way, especially if lots of files are opened/created in a short time span.
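A hedged note: inotify by design never reports which process triggered an event, so the /proc scan is the only route with inotify alone; the fanotify API (Linux 2.6.37+) does deliver the accessing process's pid in its event metadata. For watching the event stream itself from a shell, inotify-tools is handy:
Code:
# monitor open/create/close-after-write events on the folder
inotifywait -m -e open -e create -e close_write /path/to/folder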
There is Archive::Zip, which I think I can use with Perl 5.10, but I don't know how. I don't want to read or write any files, just zip something in memory, with best compression, like:
Code:
use Archive::Zip;

$text = "this is a test";
$zippedtext = Zip($text);

sub Zip {
    my $zip = Archive::Zip->new();
    # store the string as a member, ask for maximum deflate
    $zip->addString($_[0], 'data')->desiredCompressionLevel(9);
    open my $fh, '>', \my $buf;    # in-memory filehandle, no disk I/O
    $zip->writeToFileHandle($fh);
    close $fh;
    return $buf;
}
I am working on this thread: [URL]. Is it better to open the file every time I need to write to it, or should I keep the file open the whole time and, when the script is done, close it and sendmail it out?
Or, I just thought of this: I could keep concatenating to a string and just sendmail that when done.
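If the mail is the only consumer, the string approach can skip the file entirely: sendmail reads the whole message, headers included, from stdin. A hedged sketch ($log_text and the address are placeholders):
Code:
{
    echo "Subject: job report"
    echo
    echo "$log_text"
} | sendmail user@example.com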
OpenPGP standard RFC 4880. Not really a Linux question, but as many people use GnuPG on Linux I thought I would ask here.
The Modification Detection Code packet is defined to use SHA-1, even though section 13.11 does state that this can be altered, and gives example methods. However, this would cause interoperability problems, so I assume there is no standard method of doing this?
How much of a threat do you believe this to be, even though the SHA-1 hash is encrypted within the Symmetrically Encrypted Integrity Protected Data packet?
I'm setting the hardware clock on a RHEL 5.1 system using /sbin/hwclock --systohc. After setting the clock I issue a date command followed by /sbin/hwclock --show from within a script, to get fast resolution, and I see that the hardware clock precedes the system time by .5 seconds on average. I would think the clocks should be identical after setting.
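A hedged explanation: the RTC stores only whole seconds, and hwclock both rounds when setting and compensates for its own execution delay when reading, so a steady sub-second offset right after --systohc is not by itself a fault. Comparing with sub-second resolution makes the difference easier to judge:
Code:
/sbin/hwclock --systohc
date '+%H:%M:%S.%N'     # system time with nanoseconds
/sbin/hwclock --show    # ends with a fractional-second offset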