I set up a data project to burn a bunch of files onto a CD; I want to back up the files. I went to Edit > Plugins, selected "File Checksum" and set it to SHA1. Will Brasero now automatically (as part of the burning process) calculate the SHA1 of each file on the HDD and compare it with the copy burned to the CD? And if the two SHA1 values do not match, will Brasero pop up a dialog window alerting me? Do I understand correctly how this works? As long as the plugin is enabled, is this all automatic? Is the "Image Checksum" plugin only used when I am burning an ISO?
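For what it's worth, a quick way to double-check a burn by hand after it finishes is to compare checksums yourself; the file name and mount point below are only examples:
Code:
# checksum of the original file on the hard disk
sha1sum /home/user/backup/photos.tar

# checksum of the copy on the mounted CD
sha1sum /media/cdrom0/photos.tar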
I am trying to connect to the web interface found at [URL] using curl. This first requires login information to be entered at [URL], but I am having an issue with the login process. I am trying to submit the following form via POST:
Code:
<form action="j_security_check" method="post" id="login_form" name="login_form">
  <center>
    <table style="background: #cac1cf;FONT-SIZE: 12px;">
      <tr>
        <td align="center" colspan="2">Please enter your username and password:</td>
      </tr>
      <tr>
        <td align="right">Username</td>
        <td> <input name="j_username" style="width: 250px" id="j_username" type="text"/> </td>
      </tr>
      <tr>
        <td align="right">Password</td>
        <td> <input style="width: 250px" name="j_password" id="j_password" type="password"/> </td>
      </tr>
      <tr>
        <td colspan="2" align="center">
          <input value="Enter" name="enter" type="submit"/>
          <input value="Clear" name="Clear" type="reset"/>
        </td>
      </tr>
    </table>
  </center>
</form>
The command that I am using for this is the following:
Code: curl -c cookies -b cookies -L -d "j_username=user%40domain.com&j_password=pass" [URL] The command is properly formatted as far as I can tell. I tested it with another website using a similar authentication scheme using different POST variables specific to the form and it worked fine.
When I run the above command with the -v flag, it reveals this:
Code:
* Connected to lcl.uniroma1.it (151.100.4.74) port 80 (#0)
> POST /sso/j_security_check HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: lcl.uniroma1.it
> Accept: */*
> Content-Length: 44
> Content-Type: application/x-www-form-urlencoded
>
} [data not shown]
< HTTP/1.1 408 The time allowed for the login process has been exceeded. If you wish to continue you must either click back twice and re-click the link you requested or close and re-open your browser
< Date: Sat, 29 Jan 2011 15:26:41 GMT
< Server: Apache-Coyote/1.1
< Content-Type: text/html;charset=utf-8
< Content-Length: 1554
< Connection: close
<
{ [data not shown]
103  1554  100  1554    0    52   5081    170 --:--:-- --:--:-- --:--:-- 10223
* Closing connection #0
I cannot tell why the login timeout has expired when I try this, and my investigation has been fruitless. I saw a brief snippet on Google that vaguely suggested the underscores in the domain name were at fault, but replacing them with their encoded counterparts did nothing to resolve the issue (besides, underscores should be fine when sent unencoded according to the standards). I have extensively perused the man pages and have come up with nothing that adequately explains this behavior. I also talked to a friend who has worked with curl in his line of work, but he mostly has experience in the context of PHP and has not dealt with this issue before. I am running GNU/Linux 2.6.35-22-generic-pae.
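One thing worth noting: with Tomcat-style form login (j_security_check), a 408 like this usually appears when the credentials are POSTed without an existing session, because the container only accepts the login as part of a pending request to a protected page. A sketch of the usual workaround, where the protected-page path is a placeholder for whatever page you are ultimately after:
Code:
# 1) request the protected page first so the server creates a session and remembers
#    which page you wanted (the JSESSIONID cookie lands in the cookie jar)
curl -c cookies -b cookies -L "http://lcl.uniroma1.it/the/protected/page"

# 2) then POST the credentials against j_security_check using the same cookie jar
curl -c cookies -b cookies -L \
     -d "j_username=user%40domain.com&j_password=pass" \
     "http://lcl.uniroma1.it/sso/j_security_check"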
I was wondering if there is a Windows or Ubuntu way to limit the amount of data that can be sent over the internet between certain times, e.g. between 7am and 7pm only 300 MB may be downloaded from the web; when this limit is reached, the connection is either cut off or slowed down.
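On the Ubuntu side, one rough sketch of the "cut off after a quota" half uses the iptables quota match. This only covers plain HTTP on port 80, the byte count is approximate, and the counter only resets when the rules are reloaded (for example from a cron job at 7am and 7pm); the script name in the crontab line is hypothetical:
Code:
# allow roughly 300 MB of inbound web traffic, then drop the rest
iptables -A INPUT -p tcp --sport 80 -m quota --quota 314572800 -j ACCEPT
iptables -A INPUT -p tcp --sport 80 -j DROP

# reset the quota twice a day by flushing and re-adding the rules (root crontab)
# 0 7,19 * * *  /usr/local/sbin/reset-web-quota.sh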
I have a web server (co-location) and everything ran fine ... until last week. Now a lot of RX-ERRs show up in netstat and ifconfig, and when I try to fetch an external website directly on the server, for example via wget, it is very, very slow and hangs very often.
I have analysed the network but was not able to find a problem. My hoster has checked the network and everything looks fine. For example, my hoster plugged a PC into the same switch ... and was able to run wget (loading external data, like websites) at normal speed.
Since last week my websites have been delivered more slowly than before, too. It seems there is a network problem ... but how can I find it?
I can install modules ... but the server takes hours to do so. So, if you know a good command-line tool to analyse the network, please tell me.
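A few standard command-line checks might narrow it down; the interface name and hostname below are just examples. RX errors combined with slow transfers are often a duplex mismatch or a bad cable/switch port, which the first two commands would hint at:
Code:
# negotiated speed/duplex of the NIC -- a half-duplex link on a full-duplex switch port is a classic culprit
ethtool eth0

# per-interface error counters
netstat -i
ifconfig eth0 | grep -i errors

# path quality to an outside host (packet loss per hop)
mtr --report --report-cycles 100 www.example.com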
What does this lot mean? (This is what I get after typing apt-get update.)
Code:
Hit [URL]
Hit [URL]
Hit [URL]
Fetched 557kB in 2s (198kB/s)
Reading package lists... Done
W: GPG error: [URL] Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY EA8E8B2116BA136C
W: You may want to run apt-get update to correct these problems
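The W: lines mean the repository is signed, but apt does not have the public key it was signed with, so it cannot verify the package lists. The usual fix is to fetch that key from a keyserver; the key ID comes straight from the error message:
Code:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EA8E8B2116BA136C
sudo apt-get update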
Elsewhere is my question about 11.3. I have had to abandon that, and 11.3, as my RAM is too small. Now, for the first time in many years, I find it necessary to extract specific data from received .pdf files. According to OpenOffice, editing of these files is only possible from version 3.2. My concern is whether this later version will be compatible with 11.2. The OpenOffice installed is 3.1.1.4-1.1.4-i586. The same question has been directed to their forum.
My software sources somehow got screwed up. Now when I try to install from the Software Manager or Synaptic, I get error messages. Here are the error messages at the bottom of "sudo apt-get update":
W: GPG error: http://archive.ubuntu.com lucid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 40976EAF437D05B5
W: GPG error: http://archive.canonical.com lucid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 40976EAF437D05B5
I use Ubuntu 9.04. When I go to Synaptic Software Manager > Repositories > Reload, I get the following error.
GPG error: [URL]... karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7804AF9D95075E6E. What can I do?
I used a self-signed certificate: openssl req -new -outform PEM -out smtpd.cert -newkey rsa:2048 -nodes -keyout smtpd.key -keyform PEM -days 365 -x509. I followed the configuration from the website below: [URL]. On my Outlook client PCs, whenever they connect for the first time, a message pops up telling me that the certificate on my server cannot be verified; it continues after I click Yes.
How do I do away with that message, other than buying a trusted certificate? Or refer me to a good site with an Ubuntu mail server configuration that makes use of MySQL.
These are the lines in my /etc/dovecot/dovecot.conf file:
ssl_cert_file = /etc/ssl/certs/dovecot.pem
ssl_key_file = /etc/ssl/private/dovecot.pem
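The warning only goes away if the clients trust the certificate and its common name matches the host name Outlook connects to. One no-cost approach, sketched below with an example host name and port, is to export the certificate the server presents and import it into the Trusted Root Certification Authorities store on each Outlook PC (use -starttls smtp for the submission port instead of the SMTPS port):
Code:
# grab the certificate the server actually presents and convert it to DER for Windows
openssl s_client -connect mail.example.com:465 -showcerts </dev/null 2>/dev/null \
    | openssl x509 -outform DER -out smtpd.cer

# for STARTTLS on the submission port instead:
# openssl s_client -starttls smtp -connect mail.example.com:587 -showcerts </dev/null
Then copy smtpd.cer to the client PCs and import it (for example via certmgr.msc) into Trusted Root Certification Authorities.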
My SSL is not working. I just moved my site off GoDaddy and onto my own server, and now I get this error message: "SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long)"
I'm not sure what that means or how to fix it. I used these same security certificates on my other server, and they worked just fine.
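That particular error almost always means the server is answering on port 443 with plain HTTP rather than SSL, i.e. the SSL virtual host never got enabled after the move. A few things worth checking, assuming Apache on a Debian/Ubuntu-style layout; the hostname is an example:
Code:
# if this prints an HTML error page instead of certificate details, port 443 is serving plain HTTP
openssl s_client -connect www.example.com:443

# make sure mod_ssl and an SSL vhost are enabled, and that something listens on 443
sudo a2enmod ssl
sudo a2ensite default-ssl
apache2ctl -S
sudo /etc/init.d/apache2 restart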
When I go to the Update Manager to update my system I get the following error message: W: GPG error: http://packages.medibuntu.org lucid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 2EBC26B60C5A2783
I am not sure what exactly this means or what I need to do to fix it.
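For the Medibuntu repository specifically, the key is normally pulled in by its keyring package; either of the following should work (the first command tells apt to accept the unsigned keyring package this one time, and the key ID in the second comes from the error message):
Code:
sudo apt-get --allow-unauthenticated install medibuntu-keyring
sudo apt-get update

# or fetch the key directly from a keyserver
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 2EBC26B60C5A2783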
I installed CentOS 5.5 and then Xen. After that I created a virtual machine named VM01. Initially it worked properly; I tried everything and it worked. When I rebooted, I had problems with the network. I have two network cards, eth0 and eth1, but eth1 does not have any IP and I only use eth0. The error that appears is:
vif0.0: received packet with own address as source address
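That message generally means the Xen bridge is seeing its own MAC address coming back at it, which tends to be a bridging misconfiguration (for example the second NIC also ending up on the bridge, or a looped cable/switch port). A couple of harmless diagnostic commands that show what is attached where; the bridge name below is the old xend default and may differ on your setup:
Code:
# list bridges and which interfaces are enslaved to them
brctl show

# compare the MAC addresses of eth0, peth0, vif0.0 and the bridge itself
ip link show

# what the Xen network script set up (on CentOS 5 the default bridge is often xenbr0)
ifconfig xenbr0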
Recently I ran into a very strange database issue; it says:
Error: Couldn't read status information for table clients_copy ()
mysqldump: Couldn't execute 'show create table `clients_copy`': Table 'adm_retail.clients_copy' doesn't exist (1146)
Error: Couldn't read status information for table dt_mx_emp ()
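Error 1146 during a dump usually means the table is orphaned or partly missing on disk (for example the .frm file exists but the data files do not), often after a crash or an interrupted copy. A couple of checks that might narrow it down, plus a way to dump around the broken table in the meantime; the datadir path assumes a default installation:
Code:
# does MySQL still list the table at all?
mysql -u root -p -e "SHOW TABLES LIKE 'clients_copy'" adm_retail

# which of the on-disk files are actually present?
ls -l /var/lib/mysql/adm_retail/clients_copy.*

# dump the rest of the database while skipping the broken table
mysqldump -u root -p --ignore-table=adm_retail.clients_copy adm_retail > adm_retail.sql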
I've got OpenOffice.org 3.0. I tried to upgrade it to OOo 3.1 in the terminal with the command sudo apt-get update && sudo apt-get install openoffice.org 3.1. My terminal shows (the lines above these are not shown):
W: GPG error: http://ppa.launchpad.net intrepid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 632D16BB0C713DA6
W: You may want to run apt-get update to correct these problems
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Couldn't find package 3.1
I got the same results when I tried the commands with Firefox 3.5.
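Two separate things are going on there. "Couldn't find package 3.1" appears because apt-get reads "openoffice.org" and "3.1" as two different package names; you install by package name only, and apt picks the newest version the repositories offer. The NO_PUBKEY warning is the PPA's signing key missing. A sketch of both fixes:
Code:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 632D16BB0C713DA6
sudo apt-get update

# install/upgrade by package name only
sudo apt-get install openoffice.org

# to see which versions the repositories actually offer
apt-cache policy openoffice.org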
Why do I get this error on the last lines every time I run apt-get update?
Reading package lists... Done
W: GPG error: [URL] Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8F91B0E6C862B42C
Here is my /etc/apt/sources.list
After installing Postfix on my server, all emails sent by a PHP class that I built end up in the spam folder, no matter what I do. I am not an expert, except in PHP; the class I built works fine everywhere else except on this server, so I think the problem might be the server itself. Some told me it was a wrong configuration or software on my server, others told me it was a DNS problem. I don't really understand the DNS side, and I am not an expert in Linux software and services, but I can install and configure them, so could anyone please check the DNS for problems?
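Mail landing in spam after a move to a new server is very often down to DNS and identity mismatches rather than Postfix itself: a missing or mismatched reverse DNS (PTR) record, no SPF record for the sending domain, or a HELO name that does not resolve. Some quick checks, where the IP and domain below are placeholders for your own:
Code:
# reverse DNS of the server's public IP -- should point back at a real host name of yours
dig -x 203.0.113.10 +short

# A record and SPF/TXT records for the sending domain
dig example.com +short
dig example.com TXT +short

# what Postfix announces itself as (should match forward and reverse DNS)
postconf myhostname mydomain myorigin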
I am using Postfix on CentOS to send mail through a relayhost. The mail sends fine, but on certain clients such as the mail client on the iPhone or Windows Live Mail, the message is received with '(No Sender)' in the sender field. The Apple Mail client on the iMac works just fine.
Here is the PHP code I am using to send the mail: Code: ...
Wondering if there is a configuration setting in /etc/postfix/main.cf I can use to fix this problem. I have scoured the help files but have come up empty.
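A common cause of '(No Sender)' is that the PHP mail() call never sets a From: header, so stricter clients have nothing to display. If fixing the headers in the PHP code is not an option, newer Postfix releases (2.6 and later, as far as I recall) can fill in missing headers themselves; a minimal sketch:
Code:
# have Postfix add From:, To:, Date: and Message-ID: headers when they are missing
sudo postconf -e 'always_add_missing_headers = yes'
sudo postfix reload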
We have a Wipro Net Power 8552 server on RAID 5 with Linux Advanced Server. The database is Oracle. Every time a recovery is made, the block corruption error changes. There are four hard disks of Quantum make in the RAID, each with 18 GB capacity. I want to know whether there is any utility in Linux to check the hard disks for block corruption.
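There are a couple of standard tools, with the caveat that a hardware RAID controller may hide the individual disks from the OS, in which case the controller's own utility is the right place to look. The device names below are examples:
Code:
# read-only surface scan of a device for unreadable blocks
badblocks -sv /dev/sda

# SMART health data and a long self-test, if smartmontools is installed
smartctl -a /dev/sda
smartctl -t long /dev/sda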
If I have a hard disk with Ubuntu fully installed on it and I want to, all of a sudden, use Windows XP, is it mandatory that I format the hard disk first? So far, I have used two versions of Windows XP; one of them is from a few years before the other. Both copies of Windows XP cause errors when booting into setup from the CD. One of them gives a BSOD error (0x0000007B) and the other Windows CD stops and gives me an error with setupdd.sys (error code 4). Is it required that the hard disk be formatted before you even put a Windows boot CD in?
I need to ask about redirecting some mail messages to another mail server outside my own, and I need this to happen as the messages are received by my server. I also need to keep a copy of these messages.
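Assuming Postfix, one way to sketch this is a virtual alias that forwards the address to the external server plus a recipient BCC map that keeps a local copy; all addresses and host names below are examples:
Code:
# /etc/postfix/virtual -- forward the address to the external server, e.g.:
#   info@example.com    info@other-server.example.net
# /etc/postfix/recipient_bcc -- keep a copy in a local archive mailbox, e.g.:
#   info@example.com    info-archive@example.com

postconf -e 'virtual_alias_maps = hash:/etc/postfix/virtual'
postconf -e 'recipient_bcc_maps = hash:/etc/postfix/recipient_bcc'
postmap /etc/postfix/virtual /etc/postfix/recipient_bcc
postfix reload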
I got this error while updating the backend data server for Evolution: "could not do simulate: gnome-panel-2.30.0-1.fc13.i686 requires libedataserver-1.2.so.11 : Success - empty transaction"
My parents have an 8-year-old computer, and they have been running Linux for the last 2 years. Our graphics card has been a nightmare; it has not worked right since we first installed 7.10. In 9.04 it could still halfway play DVDs and videos (just barely). No longer in 10.04: MPlayer, Totem, Xine and VLC all die when you try to play a video, any video.
And here's what I get (in the terminal): The program 'totem' received an X Window System error. This probably reflects a bug in the program. The error was 'BadAlloc (insufficient resources for operation)'. (Details: serial 89 error_code 11 request_code 132 minor_code 19). (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the --sync command line option to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.)
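When every player (MPlayer, Totem, Xine, VLC) dies with BadAlloc, the common factor is usually the driver's XVideo (Xv) overlay failing to allocate, which is typical of old graphics chips with little video memory. A quick test from the terminal is to bypass Xv entirely; the file path is an example:
Code:
# force a plain X11 video output; if this plays, XVideo/the driver is the problem
mplayer -vo x11 /path/to/some/video.avi

# list the video output drivers this mplayer build supports
mplayer -vo help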
I need help with an error on my website. I get the following error:
Code: user warning: Got error 134 from storage engine query: SELECT data, created, headers, expire, serialized FROM cache WHERE cid = 'theme_registry:database1' in /var/www/html/web/includes/cache.inc on line 26.
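MySQL error 134 from the storage engine is a MyISAM-level error that generally indicates the table behind that query, here Drupal's cache table, is damaged. A repair attempt is often all it takes; the database name below is a guess, so use the one from your site's settings.php:
Code:
# check and repair the cache table (prompts for the MySQL root password)
mysqlcheck -u root -p --auto-repair drupaldb cache

# or from the mysql prompt:
#   REPAIR TABLE cache;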