AI, where to begin?

 

 

If you have any interest in AI and related topics, you know the amount of information out there is huge, and it drove me crazy trying to read it all with nowhere to begin! So I made this summary for people who are lost and looking for a simple starting point. I will keep updating this post:

 

0) Just some general info

  1. Some basic terms here.
  2. A big picture of trends here.
  3. A messy list here.
  4. An auto generated list here.

1) Cognitive Assistant that Learns and Organises (CALO)

  1. Name: CALO (a.k.a. PAL)
  2. Fund: $22 M
  3. Start: 2003
  4. End: 2008
  5. Applications: Apple Siri, US Army’s CPOF …
  6. Access: ?
  7. Comment:
  8. Link: http://www.ai.sri.com/project/CALO
  9. Link: https://en.wikipedia.org/wiki/CALO
  10. Link: http://www.businesswire.com/news/home/20030716005586/en/SRI-International-Awarded-DARPA-Contract-Develop-Cognitive

 

2) Open Cognition Project (OpenCog)

  1. Name: OpenCog
  2. Fund: ?
  3. Start: 2008
  4. End: Ongoing
  5. Applications: ?
  6. Access: Open Source
  7. Comment:
  8. Link: http://opencog.org/project/
  9. Link: http://wiki.opencog.org/w/The_Open_Cognition_Project
  10. Link: https://en.wikipedia.org/wiki/OpenCog
  11. Link: https://github.com/opencog/opencog
  12. Link: http://www.artificialbrains.com/opencog

  13. Link: http://wiki.opencog.org/wikihome/index.php/AI_Documentation

3) Open AI

  1. Name: OpenAI
  2. Fund: $1 B
  3. Start: 2015
  4. End: Ongoing
  5. Applications: ?
  6. Access: Open Source
  7. Comment:
  8. Link: https://gym.openai.com/
  9. Link: https://en.wikipedia.org/wiki/OpenAI

4) TensorFlow (Google)

  1. Name: TensorFlow
  2. Fund: ?
  3. Start: 2015
  4. End: Ongoing
  5. Applications: Deep Dream, …
  6. Access: Open Source
  7. Comment:
  8. Link: https://www.tensorflow.org/
  9. Link: https://en.wikipedia.org/wiki/TensorFlow

5) Computational Network Toolkit (Microsoft)

  1. Name: CNTK
  2. Fund: ?
  3. Start: 2016
  4. End: Ongoing
  5. Applications:
  6. Access: Open Source
  7. Comment:
  8. Link: https://cntk.ai/
  9. Link: https://github.com/Microsoft/CNTK

6) Deep Learning for JVM

  1. Name: DeepLearning4J
  2. Fund: ?
  3. Start: 2015
  4. End: Ongoing
  5. Applications:
  6. Access: Open Source
  7. Comment:
  8. Link: http://deeplearning4j.org/
  9. Link: https://en.wikipedia.org/wiki/Deeplearning4j
  10. Link: https://github.com/deeplearning4j/deeplearning4j

7) Torch

  1. Name: Torch
  2. Fund: ?
  3. Start: 2002
  4. End: Ongoing
  5. Applications:
  6. Access: Open Source
  7. Comment:
  8. Link: http://torch.ch/
  9. Link: https://en.wikipedia.org/wiki/Torch_(machine_learning)
  10. Link: https://github.com/torch/torch7

Exporting Cassandra 2.2 to Google BigQuery

So we decided to move five years of data from Apache Cassandra to Google BigQuery. The problem was not just transferring the data or the export/import process; the real issue was the very old Cassandra!

After extensive research, we planned the migration: export the data to CSV, upload it to Google Cloud Storage, and import it into BigQuery.

The pain was the way Cassandra 1.1 deals with a large number of records! There is no pagination, so at some point you are going to run out of something! If I am not mistaken, pagination was introduced in version 2.2.

After all my attempts to upgrade to the latest version (3.4) failed, I decided to try other versions, and luckily version 2.2 worked! By "worked" I mean I was able to follow the upgrade steps to the end and the data was accessible.

I could not get any support for a direct upgrade, and my attempts to upgrade straight to 2.2 also failed, so I had no choice but to upgrade to 2.0 first and then upgrade that to 2.2. Because this is an extremely delicate task, I would rather just forward you to the official website first and only then give you the summary. Please make sure you check docs.datastax.com and follow their instructions.

To give an overview, you are going to do these steps:

  1. Make sure all nodes are stable and there are no dead nodes.
  2. Make a backup (your SSTables, configuration files, etc.)
  3. It is very important to successfully upgrade your SSTables before proceeding to the next step. Simply use
    nodetool upgradesstables
  4. Drain the node using
    nodetool drain
  5. Then simply stop the node
  6. Install the new version (I will explain a fresh installation later in this document)
  7. Apply the same configuration as your old Cassandra, start it, and run upgradesstables again (as in step 3). Repeat for each node.

Installing Cassandra:

  1. Edit /etc/yum.repos.d/datastax.repo
  2. [datastax]
    name = DataStax Repo for Apache Cassandra
    baseurl = https://rpm.datastax.com/community
    enabled = 1
    gpgcheck = 0
    
  3. And then install and start the service:
    yum install dsc20
    service cassandra start
    

Once you have upgraded to Cassandra 2+, you can export the data to CSV without pagination or crashing issues.

Just for the record, a few commands to get the necessary information about the data structure are as follows:

cqlsh -u username -p password
describe tables;
describe table abcd;
describe schema;

And once we know which tables we want to export, we use them alongside their keyspace. First, add all your commands to one file to create a batch:

vi commands.list

For example, a sample command to export one table:

COPY keyspace.tablename TO '/backup/export.csv';
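If the defaults don't fit, cqlsh's COPY also accepts a column list and options. A sketch (the column names are placeholders, and the option names are as I recall them from 2.2's cqlsh; run HELP COPY in your version to confirm):

```sql
-- Export selected columns with a header row; PAGESIZE tunes how many
-- rows are fetched per page during the export.
COPY keyspace.tablename (id, name, created_at)
  TO '/backup/export.csv'
  WITH HEADER = true AND PAGESIZE = 1000;
```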

And finally run the commands from the file:

cqlsh -u username -p password -f /backup/commands.list

So by now you have exported the tables to CSV file(s). All you need to do now is upload the files to Google Cloud Storage:

gsutil rsync /backup gs://bucket

Later on, you can use the Google API to import the CSV files into Google BigQuery. You can check out the Google documentation for this at cloud.google.com
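Besides the raw API, one option for the last leg is the bq command-line tool from the Google Cloud SDK. A rough sketch, assuming a dataset named mydataset already exists and a simple two-column schema (both names are placeholders):

```
# Load a CSV from Cloud Storage into a BigQuery table; add
# --skip_leading_rows=1 if your CSV files include a header row.
bq load --source_format=CSV mydataset.tablename gs://bucket/export.csv id:integer,name:string
```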

Linux restricted administration

One of the challenges when adding a user in a Linux environment is precisely defining what they can and cannot do. Some may find configuring authorisation in Linux a bit complicated. In my case, I needed to add a user with the privilege to execute some commands with sudo, but without full root access.

The first thing is to create the user we want to customise:
adduser jack

and then create a key pair and later pass the private key (id_rsa) to him:

ssh-keygen -t rsa
mkdir jack/.ssh
chmod 700 jack/.ssh/
cp id_rsa.pub jack/.ssh/authorized_keys
chmod 600 jack/.ssh/authorized_keys
chown -R jack:jack jack/.ssh

Then you should edit the sudoers:

visudo -f /etc/sudoers.d/developers-configs
User_Alias DEVELOPERS = jack,john

Cmnd_Alias SEELOGS = /usr/bin/tail /var/log/nginx/*.error, /usr/bin/tail /var/log/nginx/*.access, /bin/grep * /var/log/nginx/*.error, /bin/grep * /var/log/nginx/*.access

Cmnd_Alias EDITCONFIGS = /bin/vi /etc/nginx/site.d/*.conf, /usr/bin/nano /etc/nginx/site.d/*.conf, /bin/cat /etc/nginx/site.d/*.conf

Cmnd_Alias RESTARTNGINX = /sbin/service nginx status, /sbin/service php-fpm status, /sbin/service nginx restart, /sbin/service nginx configtest, /sbin/service php-fpm restart

DEVELOPERS ALL = NOPASSWD: SEELOGS,EDITCONFIGS,RESTARTNGINX

We just created DEVELOPERS as an alias for users, and SEELOGS, EDITCONFIGS and RESTARTNGINX as aliases for the commands the users can execute; we then assigned the SEELOGS, EDITCONFIGS and RESTARTNGINX privileges to DEVELOPERS. If you want users to be prompted for a password, you can remove the “NOPASSWD:” part.

Please note that depending on your OS you may need to allow the user in “/etc/ssh/sshd_config”, for example “AllowUsers jack”.
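To double-check the result, sudo can list what a given user is allowed to run. A quick check, run as root (jack is the example user from above):

```
sudo -l -U jack
```

It should print the expanded SEELOGS, EDITCONFIGS and RESTARTNGINX commands under a NOPASSWD rule.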

WPA/WPA2 Cracking with GPU in AWS

DISCLAIMER: The information provided on this post is to be used for educational purposes only. The website creator is in no way responsible for any misuse of the information provided. All of the information in this website is meant to help the reader develop a hacker defence attitude in order to prevent the attacks discussed. In no way should you use the information to cause any kind of damage directly or indirectly. You implement the information given at your own risk.

In this post we are going to see how practical it is to perform a brute-force attack on a captured WPA or WPA2 handshake. A couple of years ago WPA/WPA2 was considered secure, but with the current power and cost of cloud computing, anyone with the slightest interest can set up a very fast server for brute-force attempts at a very cheap price (as low as $0.60 per hour!).

I am going to walk through my experiment and share the details and results with you. There are dozens of tutorials for this out there; this is just my own little experiment.

Brute-forcing a WPA or WPA2 password begins with capturing the four-way handshake of the target WiFi. I am not going to cover that here, as you can find a lot of solutions for it! I can only mention the Kali toolbox, which provides the tools. So we will assume you have the WPA four-way handshake in a handshake.cap file.


Free secure backup in MEGA

http://mega.nz/ is a secure cloud storage service that offers up to 50 GB of free space. Using this service is recommended due to its very tight security: even the user cannot regain access to the data if they lose the password (and lose the recovery key).

 

Mega provides some scripts for uploading, syncing and so on to their cloud. This is especially useful when it comes to cheap, secure backup of your files. All you need to do is create a free account to begin with, and perhaps purchase a premium account for better service.

First, it is appropriate to set up proper locale variables:

localedef -i en_US -f UTF-8 en_US.UTF-8

And then set up the dependencies (Amazon Linux):

yum groupinstall 'Development Tools'
yum install glib* openssl-devel libcurl-devel libproxy gnutls
rpm -i ftp://rpmfind.net/linux/centos/6.7/os/x86_64/Packages/glib-networking-2.28.6.1-2.2.el6.x86_64.rpm
yum update kernel
yum update

OK! Now that we have all the required libraries in place, we can proceed to the megatools installation:

wget https://megatools.megous.com//builds/megatools-1.9.95.tar.gz
tar xvf megatools-1.9.95.tar.gz 
cd megatools-1.9.95
./configure
make

In case you don’t want to pass the account username and password with each command, you can save them in PLAINTEXT in a file (it is safer in some respects, but it carries the risk of unauthorised access to this file):

vi /root/.megarc:
[Login]
Username = your-email
Password = your-password
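Since the password sits there in plaintext, it seems wise to lock the file down so only root can read it (the path matches the .megarc created above):

```
chmod 600 /root/.megarc
```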

Just to test, run the mega disk-free tool, which gives you some info about the space you have in the cloud:
./megadf

You can find more command details at https://megatools.megous.com/man/megatools.html

And at the end some examples:
./megaput file_to_upload
./megaget /Root/file_to_download
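These tools make unattended backups straightforward. A minimal nightly crontab sketch (the local archive path and the /Root/backups remote folder are hypothetical; check megatools' man page for megaput's exact options in your version):

```
# Upload the nightly archive to MEGA at 03:00
0 3 * * * /root/megatools-1.9.95/megaput --path /Root/backups /backup/nightly.tar.gz
```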

Monitoring SMTP server (using mailgraph, qshape and postqueue)

Monitoring SMTP mail has never been easier! You can check the number of sent emails, bounced emails, rejected emails and so on (you can see a demo at http://www.stat.ee.ethz.ch/mailgraph.cgi).

Let’s get the source:

wget http://mailgraph.schweikert.ch/pub/mailgraph-1.14.tar.gz
tar xf mailgraph-1.14.tar.gz
mv mailgraph-1.14 /var/mailgraph

and install dependencies:

yum install perl-rrdtool
yum install perl-File-Tail

and run mailgraph once to fetch the previous logs:

./mailgraph.pl -l /var/log/maillog -c -v

then run it as a service:

cp mailgraph.pl /usr/local/bin/
./mailgraph-init start

Now the CGI file needs to be executed by the web server, so once we have installed the web server we will configure the CGI settings. The main changes are the AddHandler directive, adding ExecCGI to the “/var/mailgraph” directory, and adding mailgraph.cgi as an index file, like index.php.

yum install httpd
cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.original
vi /etc/httpd/conf/httpd.conf (appendix 01)
service httpd start
chkconfig httpd on

Now you should be able to browse the web server and see the stats. I have to note that the stats are not updated frequently. I couldn't figure out how often they are updated, and sometimes I had to run the mailgraph script manually to get the latest stats. At this point we are done with mailgraph.

Now let's take a look at qshape and other CLI tools for monitoring SMTP. qshape and postqueue are two useful scripts found in the postfix additional scripts package. We need to install it first:

yum install postfix-perl-scripts

And here are some examples of the tool to get stats for the different queues:

qshape hold
qshape deferred
qshape active

postqueue is another tool; the two egrep commands below count messages that are active (marked with *) and queued but not active, respectively:

postqueue -p
postqueue -p | egrep -c "^[0-9A-F]{10}[*]"
postqueue -p | egrep -c "^[0-9A-F]{10}[^*]"

we can also use the maillog directly with the assistance of grep:

grep -c "postfix/smtp.*status=sent" /var/log/maillog 
grep -c "postfix/smtp.*status=bounced" /var/log/maillog
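The greps above generalise to a small helper. A sketch (count_status and the sample log file are my own invention for illustration; on a real server you would point it at /var/log/maillog):

```shell
# Hypothetical helper: count Postfix delivery outcomes in a mail log.
count_status() {
  for s in sent bounced deferred; do
    printf '%s=%s\n' "$s" "$(grep -c "postfix/smtp.*status=$s" "$1")"
  done
}

# A made-up three-line sample log, just to demonstrate the output format.
printf '%s\n' \
  'May  1 00:00:01 mx postfix/smtp[111]: AAA: status=sent (250 ok)' \
  'May  1 00:00:02 mx postfix/smtp[222]: BBB: status=bounced (rejected)' \
  'May  1 00:00:03 mx postfix/smtp[333]: CCC: status=sent (250 ok)' \
  > /tmp/maillog.sample

count_status /tmp/maillog.sample
# sent=2
# bounced=1
# deferred=0
```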


Deploying SMTP server using Postfix and OpenDKIM

Some days ago we had an issue with Amazon SES and decided to make our own SMTP relay service. I tried the combination of sendmail with dkim-milter, but for some reason I could not make it work, so I started another server from scratch, which did work this time. This post mainly focuses on configuring OpenDKIM, as Postfix is fairly straightforward.

The first thing to do is disable sendmail, which is the default SMTP client on Amazon Linux:

service sendmail stop
chkconfig sendmail off

And then install Postfix and configure it:

yum install postfix
cp /etc/postfix/main.cf /etc/postfix/main.cf.original 
vi /etc/postfix/main.cf (appendix 01)

Configuration mainly involves adding the DKIM settings (such as the milter socket info) and modifying the reception restrictions. In the new settings we define a sender_access file containing the list of senders allowed to relay through our SMTP service.

echo "DOMAIN.COM OK" >> /etc/postfix/sender_access
postmap /etc/postfix/sender_access

Now we are done with Postfix, so it is good to check by sending a test mail.

service postfix start
chkconfig postfix on
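A quick way to smoke-test the relay is the sendmail-compatible wrapper that Postfix installs (the path matches sendmail_path in appendix 01; both addresses are placeholders, and -f sets the envelope sender so the sender_access check applies):

```
echo "test body" | /usr/sbin/sendmail.postfix -f postmaster@DOMAIN.COM you@example.com
tail /var/log/maillog
```

Look for a status=sent line in the log tail.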

The main concern of this post, DKIM, is to cryptographically validate that the sender really belongs to the domain it claims (i.e. domain.com). The validation process starts with the SMTP server signing the email using its private key; the destination mail server then verifies the signature using the public key obtained from the claimed domain's DNS. If the signature verifies, the email really came from a server holding that domain's private key.

To install OpenDKIM we need some APIs from sendmail and openssl:

yum install sendmail-devel openssl-devel

OpenDKIM is not available in the default repository, so we will add EPEL and then install it:

rpm -Uvh http://mirror.pnl.gov/epel/6/i386/epel-release-6-8.noarch.rpm
yum --disablerepo=* --enablerepo=epel install opendkim

Once the installation finishes, we proceed to the configuration. There are three main files to configure: opendkim.conf contains the main settings, such as the paths of the signing table file and the key table file, and the socket address to listen on. The key table file contains the list of keys. The signing table defines which domains should be signed by which key. You will also need to add the trusted IP addresses of senders to /etc/opendkim/TrustedHosts to grant them access to the SMTP service.

cp /etc/opendkim.conf /etc/opendkim.conf.original
vi /etc/opendkim.conf (appendix 02)
vi /etc/opendkim/KeyTable  (appendix 03)
vi /etc/opendkim/SigningTable (appendix 04)

Now we need to generate a key pair (private and public); the public key will be added to the DNS records of the sending domain:

mkdir /etc/opendkim/keys/DOMAIN.COM
opendkim-genkey -D /etc/opendkim/keys/DOMAIN.COM/ -d DOMAIN.COM -s default
mv /etc/opendkim/keys/DOMAIN.COM/default.private /etc/opendkim/keys/DOMAIN.COM/default
chown -R opendkim:opendkim /etc/opendkim/keys/DOMAIN.COM
cat /etc/opendkim/keys/DOMAIN.COM/default.txt

Then start the service:

service opendkim start
chkconfig opendkim on

Finally, you have to add a TXT record in your DNS dashboard. The record name should be default._domainkey.DOMAIN.COM and it should contain something like the following (based on /etc/opendkim/keys/DOMAIN.COM/default.txt):
"v=DKIM1; g=*; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHY7Zl+n3SUldTYRUEU1BErHkKN0Ya52gazp1R7FA7vN5RddPxW/sO9JVRLiWg6iAE4hxBp42YKfxOwEnxPADbBuiELKZ2ddxo2aDFAb9U/lp47k45u5i2T1AlEBeurUbdKh7Nypq4lLMXC2FHhezK33BuYR+3L7jxVj7FATylhwIDAQAB"
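Once the DNS record has propagated, the published key can be checked from the server itself with the test utility that ships with OpenDKIM (-d is the domain, -s the selector, -vvv for verbose output):

```
opendkim-testkey -d DOMAIN.COM -s default -vvv
```

If everything is in place it reports the key as secure (or "not secure" when the zone is not DNSSEC-signed, which is still a pass).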

/etc/postfix/main.cf (appendix 01)

smtpd_milters           = inet:127.0.0.1:2525
non_smtpd_milters       = $smtpd_milters
milter_default_action   = accept

smtpd_error_sleep_time = 1s
smtpd_soft_error_limit = 10
smtpd_hard_error_limit = 20

smtpd_recipient_restrictions = 
	 permit_mynetworks
	 check_sender_access hash:/etc/postfix/sender_access
	 reject_unauth_destination

queue_directory = /var/spool/postfix
command_directory = /usr/sbin
daemon_directory = /usr/libexec/postfix
data_directory = /var/lib/postfix
mail_owner = postfix

inet_interfaces = all
inet_protocols = all

mydestination = $myhostname, localhost.$mydomain, localhost
unknown_local_recipient_reject_code = 550

alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases

debug_peer_level = 2
debugger_command =
	 PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
	 ddd $daemon_directory/$process_name $process_id & sleep 5

sendmail_path = /usr/sbin/sendmail.postfix
newaliases_path = /usr/bin/newaliases.postfix
mailq_path = /usr/bin/mailq.postfix
setgid_group = postdrop
html_directory = no

manpage_directory = /usr/share/man
sample_directory = /usr/share/doc/postfix-2.6.6/samples
readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES

/etc/opendkim.conf (appendix 02)

Canonicalization        relaxed/relaxed
ExternalIgnoreList      refile:/etc/opendkim/TrustedHosts
InternalHosts           refile:/etc/opendkim/TrustedHosts
KeyTable                refile:/etc/opendkim/KeyTable
LogWhy                  Yes
MinimumKeyBits          1024
Mode                    sv
PidFile                 /var/run/opendkim/opendkim.pid
SigningTable            refile:/etc/opendkim/SigningTable
Socket                  inet:2525@127.0.0.1
Syslog                  Yes
SyslogSuccess           Yes
TemporaryDirectory      /var/tmp
UMask                   022
UserID                  opendkim:opendkim

/etc/opendkim/KeyTable (appendix 03)

default._domainkey.DOMAIN.COM:default:/etc/opendkim/keys/DOMAIN.COM/default

/etc/opendkim/SigningTable (appendix 04)

*@DOMAIN.COM default._domainkey.DOMAIN.COM