Tag: AWS

  • Fixing TXTRDATATooLong Errors for AWS Route 53

    RFC 4408, section 3.1.3, says:

         IN TXT "v=spf1 .... first" "second string..."
     
       MUST be treated as equivalent to
     
          IN TXT "v=spf1 .... firstsecond string..."
     
       SPF or TXT records containing multiple strings are useful in
       constructing records that would exceed the 255-byte maximum length of
       a string within a single TXT or SPF RR record.
    

    So if you are getting the error “TXTRDATATooLong”, the solution is to split the value into multiple strings within the same record set. For example, instead of:

    "v=DKIM1; k=rsa; g=*; s=email; h=sha1; t=s; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDx2zIlneFcE2skbzXjq5GudbHNntCGNN9A2RZGC/trRpTXzT/+oymxCytrEsmrwtvKdbTnkkWOxSEUcwU2cffGeaMxgZpONCu+qf5prxZCTMZcHm9p2CwCgFx3
    reSF+ZmoaOvvgVL5TKTzYZK7jRktQxPdTvk3/yj71NQqBGatLQIDAQAB;" 

    you can pick a split point so that each part is less than 255 characters long and insert [double quote][space][double quote] between the parts.

    For example, I tried:

    "v=DKIM1; k=rsa; g=*; s=email; h=sha1; t=s; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDx2zIlneFcE2skbzXjq5GudbHNntCGNN9A2RZGC/trRpTXzT/+oymxCytrEsmrwtvKdbTnkkWOxSEUcwU2cffGeaMxgZpONCu+qf5prxZCT" "MZcHm9p2CwCgFx3reSF+ZmoaOvvgVL5TKTzYZK7jRktQxPdTvk3/yj71NQqBGatLQIDAQAB;"

    and as a result I got:

    dig -t TXT long.xxxxxx.yyyy @ns-iiii.awsdns-jj.org.
    ;; ANSWER SECTION:
    long.xxxxxxx.yyyy. 300    IN      TXT     "v=DKIM1\; k=rsa\; g=*\; s=email\; h=sha1\; t=s\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDx2zIlneFcE2skbzXjq5GudbHNntCGNN9A2RZGC/trRpTXzT/+oymxCytrEsmrwtvKdbTnkkWOxSEUcwU2cffGeaMxgZpONCu+qf5prxZCT" "MZcHm9p2CwCgFx3reSF+ZmoaOvvgVL5TKTzYZK7jRktQxPdTvk3/yj71NQqBGatLQIDAQAB\;"

    Note that the returned TXT record still contains [double quote][space][double quote]; however, the RFC above mandates that it be treated the same as the concatenated string.
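
    If you manage the record with the AWS CLI rather than the console, a change batch along these lines should perform the same split. The hosted zone ID, record name, TTL, and the two string halves below are placeholders, so treat this as a sketch rather than a verified command. Create split-txt.json with:

    {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "selector._domainkey.example.com.",
                    "Type": "TXT",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": "\"v=DKIM1; k=rsa; ...first half, under 255 characters\" \"...second half\""}
                    ]
                }
            }
        ]
    }

    and apply it with:

    aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXX --change-batch file://split-txt.json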

    Note that the Yahoo record in the example below does the same thing on a 128-character boundary:

    dig s2048._domainkey.yahoo.com TXT
    ;; Truncated, retrying in TCP mode.
     
    ; <<>> DiG 9.4.2 <<>> s2048._domainkey.yahoo.com TXT
    ;; global options:  printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61356
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 5
     
    ;; QUESTION SECTION:
    ;s2048._domainkey.yahoo.com.    IN      TXT
     
    ;; ANSWER SECTION:
    s2048._domainkey.yahoo.com. 61881 IN    TXT     "k=rsa\; t=y\; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuoWufgbWw58MczUGbMv176RaxdZGOMkQmn8OOJ/HGoQ6dalSMWiLaj8IMcHC1cubJx2gz" "iAPQHVPtFYayyLA4ayJUSNk10/uqfByiU8qiPCE4JSFrpxflhMIKV4bt+g1uHw7wLzguCf4YAoR6XxUKRsAoHuoF7M+v6bMZ/X1G+viWHkBl4UfgJQ6O8F1ckKKoZ5K" "qUkJH5pDaqbgs+F3PpyiAUQfB6EEzOA1KMPRWJGpzgPtKoukDcQuKUw9GAul7kSIyEcizqrbaUKNLGAmz0elkqRnzIsVpz6jdT1/YV5Ri6YUOQ5sN5bqNzZ8TxoQlkb" "VRy6eKOjUnoSSTmSAhwIDAQAB\; n=A 2048 bit key\;"

  • Configuring White-Label Name Servers with AWS Route53

    Create a Route 53 reusable delegation set

    aws route53 create-reusable-delegation-set --caller-reference ns-example-com

    Output:

    {
        "Location": "https://route53.amazonaws.com/2013-04-01/delegationset/N3PIG1YNLUZGKS",
        "DelegationSet": {
            "Id": "/delegationset/N3PIG1YNLUZGKS",
            "CallerReference": "ns-example-com",
            "NameServers": [
                "ns-30.awsdns-03.com",
                "ns-1037.awsdns-01.org",
                "ns-1693.awsdns-19.co.uk",
                "ns-673.awsdns-20.net"
            ]
        }
    }

    Note down the delegation set ID:

    /delegationset/N3PIG1YNLUZGKS

    Get IP of delegated name servers

    dig +short ns-30.awsdns-03.com
    dig +short ns-1037.awsdns-01.org
    dig +short ns-1693.awsdns-19.co.uk
    dig +short ns-673.awsdns-20.net
    dig AAAA +short ns-30.awsdns-03.com
    dig AAAA +short ns-1037.awsdns-01.org
    dig AAAA +short ns-1693.awsdns-19.co.uk
    dig AAAA +short ns-673.awsdns-20.net
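
    Equivalently, a short loop over the four delegation-set name servers (the same hosts as above) prints both the A and AAAA addresses:

    for ns in ns-30.awsdns-03.com ns-1037.awsdns-01.org ns-1693.awsdns-19.co.uk ns-673.awsdns-20.net; do
        dig +short "$ns"
        dig AAAA +short "$ns"
    done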

    Then add these IP addresses, as glue records for your white-label name server names with your domain registrar and as A/AAAA records with your current DNS provider. Set the TTL to 60s.

    Create new zone with white-label name servers

    aws route53 create-hosted-zone --caller-reference example-tld --name example.tld --delegation-set-id /delegationset/N3PIG1YNLUZGKS

    Output:

    {
        "Location": "https://route53.amazonaws.com/2013-04-01/hostedzone/Z7RED47DZVVWP",
        "HostedZone": {
            "Id": "/hostedzone/Z7RED47DZVVWP",
            "Name": "example.tld.",
            "CallerReference": "example-tld",
            "Config": {
                "PrivateZone": false
            },
            "ResourceRecordSetCount": 2
        },
        "ChangeInfo": {
            "Id": "/change/C2IAGSQG1G1LCZ",
            "Status": "PENDING",
            "SubmittedAt": "2019-03-10T13:10:53.358Z"
        },
        "DelegationSet": {
            "Id": "/delegationset/N3PIG1YNLUZGKS",
            "CallerReference": "ns-example-com",
            "NameServers": [
                "ns-30.awsdns-03.com",
                "ns-1037.awsdns-01.org",
                "ns-1693.awsdns-19.co.uk",
                "ns-673.awsdns-20.net"
            ]
        }
    }

    Update NS and SOA records

    To prepare for the name server change, first lower the TTL on the following records in the new hosted zone (a CLI sketch follows the list):

    • NS records: 172800 to 60 seconds
    • SOA record: 900 to 60 seconds
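
    For the NS record this can be done with change-resource-record-sets. Below is a minimal sketch, assuming the hosted zone and delegation set created above; the SOA record can be updated the same way, keeping its existing value and changing only the TTL. Create lower-ns-ttl.json with:

    {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.tld.",
                    "Type": "NS",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "ns-30.awsdns-03.com."},
                        {"Value": "ns-1037.awsdns-01.org."},
                        {"Value": "ns-1693.awsdns-19.co.uk."},
                        {"Value": "ns-673.awsdns-20.net."}
                    ]
                }
            }
        ]
    }

    then run:

    aws route53 change-resource-record-sets --hosted-zone-id Z7RED47DZVVWP --change-batch file://lower-ns-ttl.json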

  • Install the AWS CLI with virtualenv on Gentoo

    First download virtualenv:

    wget -O virtualenv-15.0.3.tar.gz https://github.com/pypa/virtualenv/archive/15.0.3.tar.gz

    Extract virtualenv:

    tar xvf virtualenv-15.0.3.tar.gz

    Create the environment:

    python3 virtualenv-15.0.3/virtualenv.py --system-site-packages ~/awscli-ve/

    Alternatively, you can use the -p option to specify a version of Python other than the default:

    python3 virtualenv-15.0.3/virtualenv.py --system-site-packages -p /usr/bin/python3.4 ~/awscli-ve

    Activate your new virtual environment:

    source ~/awscli-ve/bin/activate

    Install the AWS CLI into your virtual environment:

    (awscli-ve)~$ pip install --upgrade awscli
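
    As a quick sanity check, while the environment is still active, confirm the CLI resolves from the virtualenv:

    (awscli-ve)~$ which aws
    (awscli-ve)~$ aws --version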

    To exit your virtualenv:

    deactivate
  • How can I get the size of an Amazon S3 bucket? – Server Fault

    The AWS CLI now supports the --query parameter, which takes a JMESPath expression.

    This means you can sum the size values returned by list-objects using sum(Contents[].Size) and count the objects with length(Contents[]).

    This can be run using the official AWS CLI as shown below; the --query option was introduced in Feb 2014:

    aws s3api list-objects --bucket BUCKETNAME --output json --query "[sum(Contents[].Size), length(Contents[])]"
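
    If you prefer labelled output, a JMESPath multiselect hash works too; this is a small variation on the command above (the bucket name is still a placeholder):

    aws s3api list-objects --bucket BUCKETNAME --output json --query "{TotalBytes: sum(Contents[].Size), Objects: length(Contents[])}"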
    

    Source: How can I get the size of an Amazon S3 bucket? – Server Fault

  • Use s3cmd to Download Requester Pays Buckets on S3

    List files under pdf:

    $ s3cmd ls --requester-pays s3://arxiv/pdf
                           DIR   s3://arxiv/pdf/
    

    List all files under pdf:

    $ s3cmd ls --requester-pays s3://arxiv/pdf/\*
    2010-07-29 19:56 526202880   s3://arxiv/pdf/arXiv_pdf_0001_001.tar
    2010-07-29 20:08 138854400   s3://arxiv/pdf/arXiv_pdf_0001_002.tar
    2010-07-29 20:14 525742080   s3://arxiv/pdf/arXiv_pdf_0002_001.tar
    2010-07-29 20:33 156743680   s3://arxiv/pdf/arXiv_pdf_0002_002.tar
    2010-07-29 20:38 525731840   s3://arxiv/pdf/arXiv_pdf_0003_001.tar
    2010-07-29 20:52 187607040   s3://arxiv/pdf/arXiv_pdf_0003_002.tar
    2010-07-29 20:58 525731840   s3://arxiv/pdf/arXiv_pdf_0004_001.tar
    2010-07-29 21:11  44851200   s3://arxiv/pdf/arXiv_pdf_0004_002.tar
    2010-07-29 21:14 526305280   s3://arxiv/pdf/arXiv_pdf_0005_001.tar
    2010-07-29 21:27 234711040   s3://arxiv/pdf/arXiv_pdf_0005_002.tar
    ...
    

    Get all files under pdf:

    $ s3cmd get --requester-pays s3://arxiv/pdf/\*
    

    List all content under src to a text file:

    $ s3cmd ls --requester-pays s3://arxiv/src/\* > all_files.txt
    

    Calculate the total and average file size:

    $ awk '{s += $3} END { print "sum is", s/1000000000, "GB, average is", s/NR }' all_files.txt
    sum is 844.626 GB, average is 4.80447e+08
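
    To mirror a whole prefix locally instead of issuing individual get commands, s3cmd's sync should also work, assuming it honors the same --requester-pays flag (a sketch, not verified against the arxiv bucket):

    $ s3cmd sync --requester-pays s3://arxiv/pdf/ ./arxiv-pdf/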
    
  • redhat – No ruby-devel in RHEL7? – Stack Overflow

    This answer comes by way of piecing together bits from other answers, so thank you to the previous contributors; I would not have figured this out without them. This example is based on the RHEL 7 AMI (Amazon Machine Image) 3.10.0-229.el7.x86_64. By default, as mentioned above, the optional repository is not enabled. Don't add another repo.d file; it already exists, it is just disabled. To enable it, first you need the repo name. I used grep to find it:

    grep -B1 -i optional /etc/yum.repos.d/*

    Above each name will be the repo id enclosed in [ ]; look for the optional repo, not optional-source. Enable the optional repo, substituting the id you found:

    yum-config-manager --enable <repo-id>

    Refresh the yum cache (not sure if this is necessary, but it doesn't hurt):

    sudo yum makecache

    Finally, you can install ruby-devel:

    yum install ruby-devel

    Depending on your user's permissions, you may need to use sudo.

    Source: redhat – No ruby-devel in RHEL7? – Stack Overflow

  • Mount EBS Volumes To EC2 Linux Instances

    View all available volumes:

    $ lsblk
    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0   10G  0 disk 
    ├─xvda1 202:1    0    1M  0 part 
    └─xvda2 202:2    0   10G  0 part /
    xvdf    202:80   0  3.9T  0 disk 
    
    $ file -s /dev/xvdf
    /dev/xvdf: data
    

    If the output is just data, the volume is empty (it has no filesystem), so we need to format it first:

    $ mkfs -t ext4 /dev/xvdf
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    262144000 inodes, 1048576000 blocks
    52428800 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=3196059648
    32000 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000, 550731776, 644972544
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done       
    

    Create a new directory and mount the EBS volume on it:

    $ cd / && mkdir ebs-data
    $ mount /dev/xvdf /ebs-data/
    

    Check volume mount:

    $ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda2       10G  878M  9.2G   9% /
    devtmpfs        476M     0  476M   0% /dev
    tmpfs           496M     0  496M   0% /dev/shm
    tmpfs           496M   13M  483M   3% /run
    tmpfs           496M     0  496M   0% /sys/fs/cgroup
    tmpfs           100M     0  100M   0% /run/user/1000
    tmpfs           100M     0  100M   0% /run/user/0
    /dev/xvdf       3.9T   89M  3.7T   1% /ebs-data
    

    To make the volume mount automatically after each reboot, we need to edit /etc/fstab. First, make a backup:

    $ cp /etc/fstab /etc/fstab.orig
    

    Find the UUID for the volume you need to mount:

    $ ls -al /dev/disk/by-uuid/
    total 0
    drwxr-xr-x. 2 root root 80 Nov 25 05:04 .
    drwxr-xr-x. 4 root root 80 Nov 25 04:40 ..
    lrwxrwxrwx. 1 root root 11 Nov 25 04:40 de4dfe96-23df-4bb9-ad5e-08472e7d1866 -> ../../xvda2
    lrwxrwxrwx. 1 root root 10 Nov 25 05:04 e54af798-14df-419d-aeb7-bd1b4d583886 -> ../../xvdf
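
    Alternatively, blkid prints the UUID for the new volume directly (using the same device name reported by lsblk above):

    $ blkid /dev/xvdf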
    

    Then edit /etc/fstab:

    $ vi /etc/fstab
    

    with:

    #
    # /etc/fstab
    # Created by anaconda on Tue Jul 11 15:57:39 2017
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    UUID=de4dfe96-23df-4bb9-ad5e-08472e7d1866 /                       xfs     defaults        0 0
    UUID=e54af798-14df-419d-aeb7-bd1b4d583886 /ebs-data               ext4    defaults,nofail 0 2
    

    Check that fstab has no errors:

    $ mount -a
    
  • How to Get the Size of an Amazon S3 Bucket?

    aws s3 ls --summarize --human-readable --recursive s3://bucket-name/
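
    With --summarize, the listing ends with Total Objects and Total Size lines, so a quick way to see only the totals (assuming none of your object keys contain the word Total) is:

    aws s3 ls --summarize --human-readable --recursive s3://bucket-name/ | grep Total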
    

    See more at AWS docs