Tuesday, April 23, 2013

Letter to my Senator regarding CISPA (H.R. 624)


I fully acknowledge that I've likely missed some badness in CISPA, but here's the letter with the issues I see in it; faxed to my Senator this morning:

April 23, 2013

The Honorable Bill Nelson
716 Senate Hart Office Building
Washington, DC 20510

Dear Senator Nelson:

As both a consumer and a small business owner who relies on the Internet for my livelihood, I took some time to read through H.R. 624 (CISPA) and found it very alarming. I hope that you feel the same way after reviewing it, and I urge you to vote nay on H.R. 624. I've included my thoughts below.

The legislation is vague. I cannot find, in the bill's language, any stipulation as to who may share with whom; it appears that a company can share user data with another company, a company can share user data with a government agency, and that government agency can share that data with other agencies.

The legislation purports to help combat cybercrime, but there really isn't much language in the bill specific to cybercrime. The bill appears to be written such that a government agency could use this shared data as a kind of dragnet to prosecute crimes wholly unrelated to the purpose at hand, without oversight, a warrant, or probable cause--and far into the future; data procured a year ago could be used to prosecute a crime five years from now. I see nothing in the law that will help combat Chinese cybercrime, as Rep. Rogers states again and again; in fact, a mechanism already exists for companies to share data with the FBI in the course of cybercrime investigations: the NCFTA.

Which leads me to my next concern: the legislation contains no guidelines or requirements for data security, retention, or destruction. The NCFTA, to my knowledge and research, appears to have the capability and knowledge to secure information transmitted to it such that it doesn't get lost or shared outside of the need-to-know organizations. And even there, given the breach that occurred on September 3, 2012 (12 million iPhone users' personal data breached; hackers alleged this data was pulled from an FBI agent's laptop), I am not wholly confident that the procedures currently used are secure enough. I am very afraid that, if companies can share willy-nilly with government agencies, other agencies may have neither the training nor the will to protect that data. (In fact, we will end up giving China and other foreign powers--as China is surely not the only threat to our national security--more than they ever wanted through oversharing and carelessness!)

The legislation contains no information about employee auditing and access. Given that data sent to agencies can, within the construct of this law, be added to a shared database that can be used in perpetuity and by multiple agencies, there is no thought to how this data can be misused and abused. Contractors who may have access to this database and government employees (and foreign spies who may find their way to access) can use this data for nefarious purposes or to target specific individuals outside of the scope of the law. (A story I read in the Washington Post many, many years ago, about a woman who spurned an IRS agent's advances, and was promptly audited, quickly comes to mind here.)

The legislation allows corporations to break their own privacy policies and the data privacy laws of other countries with no recrimination or accountability, and with no notification or recourse for those whose information is being brokered and sifted through. This is bad for consumers, and bad for US business. A large number of our customers are from outside the US - with the bulk being from the European Union, which, as you may know, has much stronger privacy laws (while we are attempting to strip user privacy, the EU is attempting to increase it). This impact will be felt even more broadly as other small and medium-sized businesses see their taxable income decline. In addition, US companies that operate in other countries are still subject to those countries' privacy laws - when Google, for example, shares with the United States a large email database containing a European customer's data, it may be breaking EU law and may be prosecuted and fined. Google knows what it's doing here and can absorb fines - but small companies may not, and may suffer business-destroying repercussions.

Some businesses have already discussed moving overseas to escape the hand of CISPA legislation - this would again have deleterious effects on our national GDP, which needs all the help it can get.

In short, this legislation is dangerous - to consumers and to US small business. Please side with us in this fight.

Sincerely,

Jen H

Saturday, September 25, 2010

Writing custom OSSEC rules for your applications

Our team recently implemented a proprietary security component for a web app we maintain. When it performs an action of note, the component writes the action to a log. As a system admin and tester babysitting a new component, I want to know about these actions when they happen, and this sounded like a perfect use case for OSSEC, an Open Source host-based intrusion detection system.

OSSEC monitors system logs, checks for rootkits and system configuration changes, and does a pretty good job of letting us know what's happening on our systems. OSSEC provides a slew of helpful components and rules for commonly-used services, but of course, it can't parse our custom log files out-of-the-box. While setting our custom rules up, I thought I'd go ahead and document the process, as I was having trouble finding a comprehensive beginning-to-end tutorial (this will also help me when I forget it later, of course).

Step 1: Add the log files you want to monitor to ossec.conf


Open up /var/ossec/etc/ossec.conf and, near the end of the file (before </ossec_config>), add the following:

<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/my_app_log.log</location>
</localfile>

I used syslog here as it's recommended for log files that have one entry per line. Available values for log_format are syslog, snort-full, snort-fast, squid, iis, eventlog (for Windows event logs), mysql_log, postgresql_log, nmapg or apache.

If you're monitoring log files that contain changeable dates, OSSEC understands strftime variables; for example, if your log file is /var/log/apache2/access.log.2010-09-25, you can set the location to <location>/var/log/apache2/access.log.%Y-%m-%d</location>.
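
Putting that together, a localfile entry for one of those rotated Apache access logs might look like this (a sketch assuming the apache log_format and the path above):

<localfile>
  <log_format>apache</log_format>
  <location>/var/log/apache2/access.log.%Y-%m-%d</location>
</localfile>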

Tip: You can render a strftime variable at the command line to verify it quickly: just run date + followed by the strftime format you want to test. For example, date +%Y-%m-%d gives us the string we need for our access logs, and date +%s gives us seconds since the Unix epoch (UTC).

Step 2: Create a custom decoder


OSSEC uses decoders to parse log files. Once it finds the proper decoder for a log entry, it parses out the fields that decoder defines (decoders live in /var/ossec/etc/decoder.xml and local_decoder.xml), then compares those values against values in the rule files - triggering an alert when the decoded values match values specified in a rule. These values can also be passed to active response commands, if you've got them enabled.

The log line I want to trigger an alert for looks something like this:

2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!


Open up /var/ossec/etc/local_decoder.xml (you can also use decoder.xml, which already exists, but using local_decoder.xml will assure that you don't overwrite it on upgrade). First, we want to create a decoder that will match the first part of the log entry. We'll use the date and first few characters to grab it using a regular expression. Note that OSSEC has its own sort of interpretation of regex, so don't try to get fancy. I spent a lot of time pulling my hair out after using \d{4} type regex syntax - think simpler and you'll have more success: you have to use \d\d\d\d instead.

In the following decoder, we start at the beginning of the line (^), then match the digits in YYYY-MM-DD HH:MM:SS. After the date and time, I may have a few different log levels listed - INFO, WARN, DEBUG, etc. - so I'll just match one or more word characters (\w+). We also want to end on something relatively unique, since the log-level regex I used is so loosey-goosey; I know this is a ForceField alert and all ForceField alerts will contain ForceField, so I'll use the following.

<decoder name="forcefield">
  <prematch>^\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d \w+ ForceField</prematch>
</decoder>

Let's take a break here, and see if this triggers our alert. Save and exit local_decoder.xml, then run /var/ossec/bin/ossec-logtest.

When it comes up, paste your log line:

2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!

**Phase 1: Completed pre-decoding.
full event: '2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!'
hostname: 'my_system'
program_name: '(null)'
log: '2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!'
**Phase 2: Completed decoding.
decoder: 'forcefield'

You should see forcefield show up as the decoder. Great! Now, let's parse out the values we care about.

Re-open local_decoder.xml and, beneath your forcefield decoder, create a new decoder:

<decoder name="forcefield-alert">
  <parent>forcefield</parent>
  <regex offset="after_parent">IP:(\d+.\d+.\d+.\d+)@(\w+): (forcefield \w+); (\.*)</regex>
  <order>srcip,url,action,extra_data</order>
</decoder>

So, what'd we do here?

The obvious stuff first: We gave it a name, and designated forcefield-alert as a child of forcefield. Whenever a log matches the forcefield decoder, it'll then be decoded using forcefield-alert to extract the data fields to match on.

Now for the fun stuff...First, we set the offset to "after_parent" - this means that OSSEC starts looking for matches after the 'prematch' stuff (date, time, & ForceField) we specified inside the parent forcefield.

So our log line actually looks like this:

2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!

But after extracting the pre-match data, our log line, in OSSEC's brain, looks like this:

IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!

So what do we care about? What fields do we want to test against? A good rule of thumb is to decode any data that you want to match inside a rule, as well as any data you might need to initiate an active response. For this log line, that means the IP address, the script name, the action, and the trailing message:

IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!

OSSEC only allows specific field names; the available ones are listed at the top of the decoder.xml file. For the purposes of our log file, we'll want the IP, the script, the action taken by the system, and the additional data. When creating the regex for OSSEC, everything inside parentheses gets extracted, so we build our regex like this:

IP:(\d+.\d+.\d+.\d+)@(\w+): (forcefield \w+); (\.*)

Then, to specify which parenthesized group maps to which field, you add the <order> line, using the field names available in decoder.xml:

<order>srcip,url,action,extra_data</order>

Save your local_decoder.xml and let's run the log file through ossec-logtest again.

ossec-testrule: Type one log per line.
2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!
**Phase 1: Completed pre-decoding.
full event: '2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!'
hostname: 'my_system'
program_name: '(null)'
log: '2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!'
**Phase 2: Completed decoding.
decoder: 'forcefield'
srcip: '127.0.0.1'
url: 'script_x'
action: 'forcefield on'
extra_data: 'enabled forcefield arbitrarily!'

Looks good! It found our decoder and extracted the fields the way we want 'em. Now, we're ready to write local rules.

Step 3: Write custom rules


Open /var/ossec/rules/local_rules.xml and add your rules. First, we create a group and a "catch-all" rule that runs against any log entry decoded by our forcefield decoder. We set this rule to level 0 because we don't want it to trigger an alert on its own:

<group name="forcefield">
  <rule id="700005" level="0">
    <decoded_as>forcefield</decoded_as>
    <description>Custom Forcefield Alert</description>
  </rule>
</group>

Next, we add dependent rules that trigger if the action matches what's specified in the rule. <if_sid> specifies the dependency:

<group name="forcefield">
  <rule id="700005" level="0">
    <decoded_as>forcefield</decoded_as>
    <description>Custom Forcefield Alert</description>
  </rule>
  <!-- Alert if forcefield enabled -->
  <rule id="700006" level="12">
    <if_sid>700005</if_sid>
    <action>forcefield on</action>
    <description>Forcefield enabled!</description>
  </rule>
  <!-- Alert if forcefield disabled -->
    <rule id="700007" level="7">
    <if_sid>700005</if_sid>
    <action>forcefield off</action>
    <description>Forcefield off!</description>
  </rule>
  <rule id="700008" level="14">
    <if_sid>700005</if_sid>
    <action>forcefield hyperdrive</action>
    <description>Forcefield in hyperdrive, watch out!</description>
  </rule>
</group>

Save your local_rules.xml file, and let's test it again:

ossec-testrule: Type one log per line.
2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!
**Phase 1: Completed pre-decoding.
full event: '2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!'
hostname: 'my_system'
program_name: '(null)'
log: '2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!'
**Phase 2: Completed decoding.
decoder: 'forcefield'
srcip: '127.0.0.1'
url: 'script_x'
action: 'forcefield on'
extra_data: 'enabled forcefield arbitrarily!'
**Phase 3: Completed filtering (rules).
Rule id: '700006'
Level: '12'
Description: 'Forcefield enabled!'
**Alert to be generated.

Cool - now we're ready to restart OSSEC and check alerts. Note that the log file you're monitoring should exist before you restart OSSEC; if OSSEC can't find the file at startup, it ignores it. Also, when writing your own rules, set levels appropriate to your OSSEC deployment - for example, if you've enabled active response and want to trigger it, make sure your decoder extracts the srcip and that the rule's level matches the level configured for your active-response command in ossec.conf.
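
A minimal restart-and-verify sequence might look like this (a sketch assuming the log path from Step 1; the test entry is just the sample log line from above):

touch /var/log/my_app_log.log            # make sure the monitored file exists first
/var/ossec/bin/ossec-control restart     # restart OSSEC so it reads the new config, decoders, and rules
# append a test entry, then watch for the alert to show up
echo '2010-09-25 15:28:42 WARN ForceField IP:127.0.0.1@script_x: forcefield on; enabled forcefield arbitrarily!' >> /var/log/my_app_log.log
tail -f /var/ossec/logs/alerts/alerts.log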

You'll probably find that you need to do some tuning: some log entries will trigger unwanted alerts if they fall through the decoder sieve. I haven't figured out a way to exclude a file from inspection if it fails to match any decoder (if you know of one, let me know!), but the solution I've used is to create a new local rule that matches on the generic syslog rule ID and a match string, like so:

<rule id="100009" level="0">
  <if_sid>1002</if_sid>
  <match>Some string in the log I don't want to see</match>
  <description>Don't syslog alert on this one</description>
</rule>

Repeat for each false positive. It'd be really useful to only allow a single decoder to work on a log file - if anyone knows how to do that, let me know!

Friday, August 20, 2010

Goofing with Audio

Husband's doing some speech analysis stuff, and introduced me to sox, a self-described "Swiss Army Knife of sound processing." I was goofing with it today to convert spectrograms of mp3s to animated GIFs. This is quick and very dirty, but should work.

Requirements

On Ubuntu, install sox, the mp3 plug-in for sox, and imagemagick:

sudo apt-get install sox libsox-fmt-mp3 imagemagick

Running

  1. Copy the text of this crappy script into a file:
    #!/bin/bash
    # Get input file
    audiofile="$1"

    if [ -z "$1" ]; then
      echo "No mp3 file provided. Use ./makemeasammich.sh /path/to/mymp3.mp3"
      exit 1
    fi
    # Get length of the audio file in seconds (sox stat writes to stderr)
    s=`sox "$audiofile" -n stat 2>&1 |grep Length |awk '{print $3}'`
    seconds=`echo $s/1 |bc`
    echo "$audiofile is $seconds seconds long"
    slice=0
    while [ $slice -lt $seconds ]
    do
      echo "Processing seconds starting at: $slice"
      # Mix down to mono, take a 9-second slice, and write spectrogram.png
      sox "$audiofile" -n remix - trim $slice 9 spectrogram
      # Zero-pad the slice number so the frames sort (and animate) in order
      mv spectrogram.png `printf "%05d" $slice`.png
      slice=`expr $slice + 9`
    done
    # Convert each PNG frame to GIF, then assemble the animated GIF
    for i in `ls *.png`; do convert $i $i.gif; done;
    # use -colors 32 to compress a bit
    convert -colors 32 -delay 100 -loop 1 *.gif "${audiofile}_animated.gif"


  2. Run the script like so:
    sh makemeasammich.sh /home/jenisgoofy/mysong.mp3

    And you should get something like the following, except it'll be larger and animated (BlogSpot doesn't support animated GIFs and I'm too lazy to copy this anywhere right now):

    You can change the convert line to change animation/looping settings; removing "-colors 32" will give you better quality (much larger filesize).
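
    For instance, if you'd rather have the animation loop forever and keep the full color palette (at the cost of a much larger file), a variant of that last line might look like this (the output name here is just an example):

    convert -delay 100 -loop 0 *.gif mysong_animated.gif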

Friday, January 29, 2010

Why I love Voice IM

During Voxilate's development and honing of HeyTell Voice Messenger, I've thought a lot about how I use (or don't use) voice mail. While a push-to-talk solution like HeyTell is definitely a great replacement for text messaging/SMS, it's also a lot more efficient than a phone call for a lot of simple, immediate use cases like:

- Where are you? Oh, there you are. (Especially with the geolocation...)
- I'm okay, I wasn't in that building.
- You at the store? Can you pick up some milk? [5 minutes later] Coffee, too!!

The thing with voice mail, too, is that while I don't think of myself as a *very* lazy person, I find that I'm almost always too lazy to check it in a timely manner. Here's what checking voicemail requires of me, lazy person:

1. See the Messages icon on my phone.
2. Dial Voicemail.
3. Type in my passcode.
4. Wait.

If I want to listen to it again, I've got to go through it all again.

I'll be honest and share my typical way of handling voicemail:
1. See the Messages icon on my phone.
2. Look at the last Incoming number.
3. Call them back...thereby avoiding the use of voice mail altogether.
4. At some point when there's more time, cycle through *every* message and save or delete one-by-one.

The cool thing, I think, about HeyTell, is the immediacy of it. Click a button, send a message. Click a button, listen to the message. Save it for later or delete it. Replay it.

Also, I'm a shy person and not just a little bit socially awkward. Therefore, the telephone and I already have issues! When I have voicemail, I feel dread. Why? Something about picking the phone up again, dialing numbers, hoping I'm not interrupting whoever it was in the middle of whatever they are doing.

With HeyTell, the whole transaction is like an instant message - say something, wait for a response, and then respond when you're ready. I'm comfortable with IM. I can get my thoughts together before I type. I can work on other things and then return to it. It's nice to do the same thing with voice. I can reduce a little of that social awkwardness, have more control over the conversation, and stop pushing so many buttons.

Voice IM is really, I think, a best-of-both-worlds combination: you get the intimacy of voice and the grokking of intonation and inflection; the speed and efficiency of not dialing, not typing, not waiting for rings (did you forget to dial that 1?); the ability to pull your thoughts and ideas together as coherently as you can in an email or instant message; and you retain the ability to multi-task. A phone call requires your undivided attention. A voice IM? Respond when you can. Save and re-play it when you need to. Fantastic for productivity!

Combine it with location-awareness so that my contacts (and *only* those I allow access) can locate me in a crowd, and it's a win-win for me. Hope others feel the same!

Why does the No Free Bugs movement exist?

Having been *a little* bit involved with product development and testing in my time, and being kind of ultra-cognizant of security most of the time, I often wonder about the "No Free Bugs" movement and why it exists.

Why don't companies pay security researchers to find security holes in their products? It seems like a win-win to me.

- By paying the researcher in exchange for signing an NDA (that specifies no disclosure until there's a fix - with a fixed end date, of course!), you get more control over disclosure - less likely to have a pissed off researcher telling everyone about it, plus you've got legal recourse.

- The researcher gets cash, cred, *and* fodder for the security con circuit.

Win-win! Maybe I'm looking at it too simplistically? Is it that researchers don't want to do this? Or corporations don't want to bother? Or don't trust the researchers enough?

External auditing firms are great for CYA, but they're expensive and still miss things. It seems to me that augmenting your 'professional' review and internal QA with a few scrappy, bright researchers who are highly motivated to break your security is the ultimate CYA when developing secure products. Every layer you can add makes your product stronger and helps shield you from liability.

---
Update - See, Google gets it!

Thursday, November 19, 2009

Who's attacking your web server today?

We're going to go a little off-book today for a segment I'd like to call, "Who's attacking my server today?"

I administer a few servers and they, like most anything connected to the Internet, are constantly under attack. Searching through my logs, I've seen a large number of pretty basic attacks trying to exploit a vulnerability in Parallels Plesk - a hosting control panel. If you're using hosting "in the cloud[1]," you're bound to see a lot of this sort of thing. Mostly automated. And often launched from "the cloud" itself!

Here's a little command line I've been using on my server to find out who's attacked today:


for i in `cat /PATH/TO/MY/ACCESS_LOGS/MYACCESSLOG_0911*.txt |grep login_up |awk '{print $1}' |sort -u`; do nslookup $i|grep "name = "|awk '{print $4}'|sed s/.$// ; done;


What this crude little command line does is search through all of my logs from November (insert the path to your own log files there), look for accesses of login_up - a hallmark of people trying to access the Parallels Plesk control panel - grab the IP from the front of each line (the awk '{print $1}'), sort it and remove duplicates (plenty of these as they scan!), look up the hostname using nslookup, grep out the hostname, and remove the trailing . that shows up in nslookup output. Crude, yes, but it gets me a nice little list of baddies like:

mail.lib.ua.edu
theman.cba.ua.edu
174.36.240.16-static.reverse.softlayer.com
174.36.254.180-static.reverse.softlayer.com
raq2.raqdedicados.com
212-174-14-27.ip.ciklet.net
advertinet03.shawneelink.net
rrcs-24-43-133-99.west.biz.rr.com
maps.2hn.com
win-268.ourcp.com
JoshKraker.com
mail.thallium-dns.net
f2.2.5d45.static.theplanet.com
triton15.lifeofandyman.info
ai.imb.br
ip-72-167-38-177.ip.secureserver.net
ip-72-167-45-11.ip.secureserver.net
72.232.228.82.svservers.com
214.104.233.72.static.reverse.ltdomains.com
ehcsla.com
serversemidedicado.joinhost.com.br
win3a-mail.ixirhost.com
mail.infocesme.com
mssql.infocesme.com
mysql.infocesme.com
webmail.infocesme.com
www.infocesme.com
infocesme.com
ftp.infocesme.com
mail.mersinhost.com
mail.finalpazarlama.com
u15180592.onlinehome-server.com
host-84-51-38-150.teletektelekom.com
38.160.isimtescil.net
ubihost.net
win3a-mail.ixirhost.com
win3a.ixirhost.com
202.249.hostcini.com
ip-97-74-194-192.ip.secureserver.net

And there we go, a list of who's attacking our Web server today!
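
For readability, here's the same pipeline broken out with comments - the commands are identical to the one-liner above, just wrapped across lines:

# gather the unique source IPs that requested login_up in the November logs
for i in `cat /PATH/TO/MY/ACCESS_LOGS/MYACCESSLOG_0911*.txt |grep login_up |awk '{print $1}' |sort -u`
do
  # reverse-resolve each IP and strip the trailing dot that nslookup appends
  nslookup $i |grep "name = " |awk '{print $4}' |sed s/.$//
done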



1"Cloud" is a fancy term we sometimes use; it too often just means "server is not in your basement."

Wednesday, September 30, 2009

Blocking Ads Can Save More Than You Think

Put yourself in a bad guy's shoes:

You have a piece of software that logs usernames and passwords to banking sites. It can do a number of other things, like propagate itself to other computers that share drives with the victims and open address books and email itself to every email address it finds - so that it can log usernames and passwords from even more sites!

It just needs to hit one system, really, to propagate. But as the guy or gal trying to get this software out and productively returning good banking credentials, if you had the chance to propagate more and better, why wouldn't you? How would you easily infiltrate as many computers as possible, and computers used by people who actually might have something in their bank accounts to reliably pilfer? You might want to take a look at something networked, something that gets propagated to a large number of mainstream sites. Because you may be found out quickly, you're looking for somewhere you might slip in surreptitiously, across a large number of trusted mainstream web sites simultaneously...

Ad networks. It's a pretty sweet attack vector, really. Massive, instantaneous, worldwide reach. Immediate impact. Solid customer base. Bi-partisan, even! Simultaneously force malware on readers of MSNBC and DrudgeReport and Salon, Washington Post, and CNN and more? Score!

A few weeks ago, the New York Times got hit with such an attack...and it wasn't stopped for at least 12 hours.

What can you do to protect yourself against drive-by ad attacks like this? Other than not checking the news - because I'll be honest, I am going to read the Drudge Report daily, no matter what. Malware will not keep me away.

First thing: Don't install anything when prompted unless you yourself prompted the install and you know what you're installing. A virus scan initiated by a Web site you just hit? Close the window, don't click OK! And don't ever enter your password or grant some unbidden installer elevated privileges!

Second thing: As much as it hurts the newspapers and advertisers right now, you can choose not to have the ads served using a few different methods. We'll talk about two quick and dirty methods today.

Ad Block Plus plug-in for Firefox

The AdBlock Plus plug-in blocks ads automatically and hides them from view. To install it:

  1. In Firefox, select Tools > Add-ons.
  2. Select Get Add-Ons, enter "Adblock" in the search window, and press Enter.
  3. Select AdBlock Plus and click Add to Firefox.
  4. Click Install Now and restart Firefox when prompted.
When Firefox restarts, you should see a red Stop sign icon in your Navigation toolbar - you can use this to make modifications to ad blockage.

Modify your hosts file so that all ad-based URLs redirect to your local system and *not* to the ad site!

Dan Pollock @ SomeoneWhoCares.org maintains a hosts file of known ad servers. You can replace the hosts file on your system with his list, so that whenever a web page requests an ad server, it redirects to your own system instead. Note that it doesn't hide the spots where the ads should be the way AdBlock Plus does - you'll see either whatever your local web server serves, or a failed to connect error if you aren't running a local web server. Basically - whatever you see at http://127.0.0.1 is what you'll see in the ad view boxes.

Copy his list at http://someonewhocares.org/hosts/ (or your own list, if you've been keeping score) and paste it into your own system's hosts file (note that you need to be root or Administrator to do this). In Linux, add the data from Dan's list to /etc/hosts. In Windows NT, 2000, XP, and Vista, add it to c:\Windows\system32\drivers\etc\hosts. In Windows 95/98 and ME, add it to C:\Windows\hosts.
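
For reference, entries in a hosts file are just an IP address followed by a hostname, one per line. Here are a couple of illustrative entries in the same format Dan's list uses (the second hostname is made up for the example; use his full list for real coverage):

127.0.0.1 ad.doubleclick.net
127.0.0.1 ads.example-adnetwork.com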