Mediumcube.com Blog: Information about technologies at Mediumcube and our services

July 5, 2013

How to rename MSSQL database MDF and LDF files

Filed under: Technical — admin @ 12:31 pm

This is a step-by-step tutorial on how to change the MS SQL database file names (file.mdf and file.ldf) or their location. Replace the example names (MYDB, the paths) with the names used in your database. This has been tested on SQL 2005/2008/2012.

The method will work for renaming the whole DB, its files, and its logical file names; or you can use steps 3-6 alone to change only the location of the .mdf and .ldf files:

1) Rename the actual database MYDB ==> MYDBold in SSMS (SQL Server Management Studio). (This is only necessary if you want to rename the actual DB as well.)

2) Open a query window for MYDBold and change the logical names in the DB. These names can be found by right-clicking on the DB -> Properties -> Files tab:
ALTER DATABASE MYDBold MODIFY FILE (NAME = MYDB_data, NEWNAME = MYDBold_data);
ALTER DATABASE MYDBold MODIFY FILE (NAME = MYDB_log, NEWNAME = MYDBold_log);

3) Alter the files for the DB, changing to the new .mdf and .ldf file locations:

ALTER DATABASE MYDBold MODIFY FILE (NAME = MYDBold_data, FILENAME = 'D:\SQL_Data\MYDBold.mdf')

GO

ALTER DATABASE MYDBold MODIFY FILE (NAME = MYDBold_log, FILENAME = 'D:\SQL_Data\MYDBold.ldf')

GO

4) Take the DB offline in SSMS (right-click on the DB -> Tasks -> Take Offline). If the DB takes very long to go offline, you can try Detach instead, with Drop Connections and Update Statistics checked.

5) Change the actual physical file names on the hard drive

6) Bring the DB back online, and voila! Your DB is now attached from the new storage location.
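Step 5 can also be scripted if a bash-capable shell is available on the host (e.g., Git Bash or Cygwin). The sketch below only illustrates the rename; the directory and file names are placeholders for your own data path:

```shell
#!/bin/bash
# Sketch of step 5: rename the physical files while the DB is offline.
# DATA_DIR and the MYDB names are placeholders - adjust to your setup.
DATA_DIR="${DATA_DIR:-/tmp/sql_demo}"    # e.g. /d/SQL_Data on the server

# Demo only: simulate the data directory and the original files.
mkdir -p "$DATA_DIR"
touch "$DATA_DIR/MYDB.mdf" "$DATA_DIR/MYDB.ldf"

# The actual rename: MYDB.* -> MYDBold.*
for ext in mdf ldf; do
  mv "$DATA_DIR/MYDB.$ext" "$DATA_DIR/MYDBold.$ext"
done
ls "$DATA_DIR"
```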

November 4, 2012

Cloud vs Shared Hosting

Filed under: General — admin @ 5:09 pm

Since we began in the hosting business back in 2001, a lot has changed. Instead of shared hosting, we now call it cloud hosting. In addition to dedicated servers, nowadays we hear a lot about virtualized servers. Instead of websites, we now have applications.

There is a lot of hype with any technology that comes along, but the main concepts remain the same: resources are either shared or dedicated. There is no doubt that the cloud in its basic form is a shared resource. Applications share the servers running them; thus processors, memory and disks are all shared among all the users of the system. These users are typically spread across many servers. The term 'cloud' is another twist on the term 'shared'.

However, consider this scenario: User A and User B both reside on Server 1. If User B starts using a lot of CPU and disk resources, User A will be affected regardless of whether this is shared hosting or cloud hosting.

The only exception is virtualized servers, which have their own dedicated memory and, most of the time, dedicated CPU cycles. However, even virtualized servers suffer from the shared-disk problem: if the SAN (Storage Area Network) or storage appliance used by the servers degrades in performance, everyone connected to that shared resource is affected.

The last option is the dedicated cloud. This is a set of hardware (storage/firewall/physical servers) dedicated to your environment. Thus, the risk of performance degradation is limited to the servers and applications running within that dedicated cloud.

In our environment, we focus on segregated cloud hosting. We split our resources among physical servers with a dedicated storage resource for each server, loading each server with only a certain number of virtual machines so that each one gets its fair share without inconveniencing the other users on the system. If an application consumes more than its fair share of resources, its virtual machine is moved to a server with more dedicated resources.

Because we divide our cloud into segmented physical servers, the chance of one server overloading the other machines is very slim compared to a SAN system with hundreds or perhaps thousands of users.

In our cloud or shared hosting, reliability, performance and security come first.

April 17, 2011

Monitor Utility for 3ware / LSI RAID on Linux

Filed under: General — admin @ 7:21 pm

For those seeking a small and efficient script to monitor a 3ware RAID array from the command line, here is how to do it. The script was initially posted on cpanelconfig.com, but that site no longer exists, so we wanted to help those looking for the same thing.

For liability concerns, this post is provided AS IS. Proceed at your own risk. Though we've tested this script, we can't guarantee it will function 100% of the time, nor are we responsible for any issues that may arise from using it.

The script will allow you to monitor the RAID array and receive notifications when the array is down. The setup consists of 3 items:

raid-health.sh : file that contains the check RAID script and send email
raid-health-body.txt: body message of the sent email
tw_cli: This is the tool available from 3ware to run CLI (Command Line Interface) through shell. This must be downloaded to a folder

First is the raid-health.sh script, as follows; you can place it anywhere on your system (if you copy/paste through PuTTY, watch for quotation marks getting converted):

#!/bin/bash
com=`/home/path/to/script/tw_cli info c0 u0 status | awk '{print $4}'`
echo $com
if [ "$com" != "OK" ];
then
mail -s "RAID Warning Subject Line" \
email@yourdomain.com < /home/path/to/raid-health-body.txt
fi
Change the paths and the email address above to match your system.

Second, create raid-health-body.txt file or copy the following text to a file named raid-health-body.txt:

RAID is DEGRADED on someservername.server.com

Make sure raid-health.sh is chmod 755 (executable) and raid-health-body.txt is chmod 644 (readable) so the cron job can run the script.

Last, add the following line to your crontab (typically root's, under /var/spool/cron):

0 8,19 * * * /home/path/to/raid-health.sh > /dev/null 2>&1

The above will check the RAID status at 8 am and 7 pm. Save the cron job and watch for the emails when the RAID fails.

Tips:
===
– The above script sends an email whenever the array status is NOT OK. This means that if the array is in Verify or Rebuild mode, it will also send an email, so it is preferable to schedule the cron job outside of verify periods. If you wish to change this behavior, simply replace the line:

if [ "$com" != "OK" ];      with      if [ "$com" = "DEGRADED" ];


– It is better to test the script before adding it to the cron. Do that by running raid-health.sh manually.

– If you get any syntax errors, check that the quotation marks in your copy of the script match those written above.

– The line tw_cli info c0 u0 status means: check controller #0, logical unit #0. If you have multiple controllers or multiple logical units, the line needs to be adjusted accordingly.
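For multiple controllers or units, the status check can be factored into a small function and looped. The status-line format below (/c0/u0 status = OK) is an assumption based on typical tw_cli output; verify it against your tw_cli version before relying on it:

```shell
#!/bin/bash
# Sketch: isolate the parsing logic so it can be looped over several
# controller/unit pairs. tw_cli itself is not invoked here; check_line
# receives the raw status line, mirroring
#   tw_cli info c0 u0 status | awk '{print $4}'
check_line() {
  local status
  status=$(echo "$1" | awk '{print $4}')
  if [ "$status" != "OK" ]; then
    echo "ALERT"
  else
    echo "HEALTHY"
  fi
}

check_line "/c0/u0 status = OK"          # prints HEALTHY
check_line "/c0/u1 status = DEGRADED"    # prints ALERT
```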

March 27, 2011

Secure folders requiring 777 CHMOD Permission

Filed under: General — admin @ 7:06 pm

We have all been in the situation where certain folders require full read/write access by the web server. Unfortunately, if the script utilizing the world-writeable folder is insecure, it may allow external users to upload malicious content to folders chmod 777. This issue can typically be mitigated by using suPHP with Apache, which runs each PHP process under the user owning the file. However, in some instances using suPHP may not be an option.

The following lines can be added to a .htaccess file in the folder that requires 777 (rwxrwxrwx). Note, the example below blocks direct HTTP access (GET/POST) to files in that folder, but does not prevent a script from parsing the files via "include" functions:

Options All -Indexes
<FilesMatch "\.(php[0-9]?|cgi|c|txt|s?p?html?|pl|exe)$">
Order Deny,Allow
Deny from all
</FilesMatch>
<Limit PUT DELETE>
order deny,allow
deny from all
</Limit>

The example above blocks calling the following extensions through HTTP: .php/.cgi/.c/.txt/.phtml/.shtml/.html/.htm/.pl/.exe

The <Limit PUT DELETE> block prevents upload (PUT) and delete (DELETE) calls on the folder through HTTP.
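As a quick sanity check before deploying, a FilesMatch-style extension pattern can be exercised with grep -E (POSIX ERE agrees with Apache's regex engine for this subset). The pattern below is one way to cover the extensions listed above:

```shell
#!/bin/bash
# Test which filenames an extension pattern catches before putting
# it in a FilesMatch block. Adjust the pattern to taste.
pattern='\.(php[0-9]?|cgi|c|txt|s?p?html?|pl|exe)$'

matches() {
  echo "$1" | grep -qE "$pattern" && echo BLOCKED || echo ALLOWED
}

matches "index.php"     # prints BLOCKED
matches "page.shtml"    # prints BLOCKED
matches "photo.jpg"     # prints ALLOWED
```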

January 17, 2010

Critical Internet Explorer Vulnerability

Microsoft has issued an advisory advising of a 0-Day Exploit in Internet Explorer 6, 7 and 8 which could allow a remote attacker to install a trojan virus on a user system without their knowledge.

Microsoft is working on a solution, but had not released one as of the time of writing this post. Because exploit code is already available on the web with no patch in sight, it is highly recommended not to visit untrusted sites, or to use a browser other than Internet Explorer until a patch is released.

More information can be found at:
http://www.microsoft.com/technet/security/advisory/979352.mspx

Possible workaround released by Microsoft (Enabling DEP):
http://support.microsoft.com/kb/979352

This is a very serious security threat. From the reports available, anti-virus software is not sufficient protection against this weakness.

UPDATE: Microsoft has released a patch for the above vulnerability; however, a new exploit of the same severity has been found. Therefore, we highly recommend not using Internet Explorer for the time being. The new advisory: http://www.microsoft.com/technet/security/advisory/980088.mspx

January 3, 2010

Secure Windows Servers using IPSec Firewall

IPSec (Internet Protocol Security) is a cross-platform protocol for securing IP communications through authentication and encryption. IPSec operates at Layer 3 (the Network Layer), which makes it a powerful tool for managing how traffic flows over a network.

The IPSec firewall rules are available for download by clicking here.

IPSec was first introduced in Microsoft products with Windows 2000 and has been improved since. The main purpose of IPSec in Windows Server 2000/2003/2008 is to secure traffic between clients and domains, and between domains. However, another great benefit of IPSec is its flexibility to act as a software firewall on the Windows platform. The Windows 2003 firewall is so weak and inflexible that most of the time it ends up disabled on servers. This is very dangerous for Windows web hosting servers, and more specifically for anyone running a DC over the internet with no proper firewall in place.

One of the major failings of the Windows Firewall is its inability to apply more than one rule to the same port. Thus, if we need to block traffic to port 1433 (the MSSQL port) but allow only two specific IP addresses in two different networks to access it, that is not possible in the 2003 version of the Windows Firewall.

The second major failing is the inability to differentiate inbound and outbound traffic. For example, we're unable to filter out connections to a specific external network address on all ports. We experience the same problem when attempting to allow incoming connections on all ports for a specific IP. This is not to mention the numerous times the firewall caused network traffic issues, especially with earlier versions of Windows 2003 SP2.

Windows 2008 / 2008 R2 improves on the Windows Firewall by adding inbound and outbound rules. It also allows granting specific subnets or IPs full access to all ports. The 2008 version of the firewall acts almost like an IPSec-based firewall. Yet when we have a mixed environment of 2003/2008 servers, we'll want firewall services running on both, making IPSec the only reliable option.

In this article we'll describe how to access IPSec, as well as provide sample IPSec firewall rules. IPSec can be managed through a GUI (Local Security Policies) or a command-line tool (netsh). For our purposes, the steps are identical for Windows 2000/2003/2008. The 2008 IPSec version has one small difference: you can copy/paste the IP address into the IP field, while 2000/2003 require typing it manually. An IPSec firewall can also be set up on Windows XP Professional.

The IPSec Snap-in is available from:
Start -> (Settings) Control Panel -> Administrative Tools -> Local Security Policy

Alternative way to open Local Security Policy is through:
Start -> Run -> type: secpol.msc

This will launch the Local Security Settings/Policies Snap-in. The IPSec rules are available under the section “IP Security Policies” on the left side.

NOTE: On a Domain Controller, you want to utilize IPSec under the Domain Controller Group Policy if you wish to secure your DCs.

[Screenshot: IPSec console]

You'll notice on the right side that IPSec lists the current policies available on the system. Only one policy can be active at a time. If you right-click on any of the rules on the right, you'll notice the option "Assign". Assigning a policy makes it active.

Now we need to import our sample firewall settings. Note that policies can be exported/imported by right-clicking on "IP Security Policies" -> All Tasks on the left side of the screen. Importing new policies with different names does not affect currently configured policies. Furthermore, if we ever experience issues with policies, we can restore IPSec back to the default system policies.

Once we've downloaded the firewall rules, unzip them and place them somewhere on the computer. Then head back to the IPSec snap-in, right-click on "IP Security Policies" on the left side -> All Tasks -> Import Policies, and point to the location of the IPSec rules we've just unzipped.

The firewall rules are disabled by default, so we need not be concerned about being locked out when they are imported. Our IPSec snap-in should look like this:

[Screenshot: IPSec snap-in after import]

We'll notice a new rule named "Network Firewall" has been added to our list. If we click on "Network Firewall", we'll notice that the policy is not assigned (there are On/Off switches at the top for toggling the Assigned status).

Double-clicking on the "Network Firewall" rule will bring up the Properties screen:

[Screenshot: IPSec firewall Properties]

Notice the "IP Filter List" tab at the top; click on it to sort the rules by name. The firewall logic is very simple: it begins by denying all incoming/outgoing traffic on all ports to any destination, then uses rules to allow specific ports and exemption lists as desired. The following is a brief description of how these rules function:

1-DENY UDP ALL: Denies ALL Inbound and Outbound UDP Connections

2-DENY TCP ALL: Denies ALL Inbound and Outbound TCP Connections

3-DENY BAD IP: Explicitly denies access to IPs/Subnets, even to already opened ports (overrides the allowed-ports rules)

4-EXEMPTIONS: Here we specify all IPs/Subnets that we wish to give explicit access to ALL ports, regardless of which ports are blocked (overrides the Deny TCP/UDP rules)

The rest of the rules indicate which ports to open. A check mark next to a rule means the rule is active and the port is open to everyone except the IPs in the 3-DENY BAD IP list.

If there is no check mark next to a rule, it means the port is blocked to everyone except the 4-EXEMPTIONS list.

We'll notice that some of the rules indicate Server/Client access. This is necessary since we've blocked both incoming and outgoing ports. Thus, if we need to connect to an SSH server from our Windows machine, we enable SSH Client; if we need external clients to connect to our Windows server over SSH, we enable SSH Server. Basically, Client allows outgoing connections, while Server allows incoming connections.

We’ve by default enabled the most popular ports: HTTP/HTTPS, SMTP, POP3, MSSQL, RDP. Please review the firewall lists before deciding to enable the Network Firewall policy.

When we're ready to activate the Network Firewall, from the IPSec snap-in right-click on the "Network Firewall" rule and select "Assign":

[Screenshot: assigning the Network Firewall policy]

This will enable the firewall on the current system. If for any reason we wish to disable the firewall, right-click on the “Network Firewall” rule again, and click on “Un-Assign”.
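The same toggling can also be done from the command line with netsh, as mentioned earlier. A sketch only; the policy name matches the rule imported above, and the exact netsh ipsec syntax should be double-checked against your Windows version:

```bat
:: Sketch (Windows cmd): manage the imported policy without the GUI.
:: List the static IPSec policies configured on this machine
netsh ipsec static show all

:: Activate the imported policy
netsh ipsec static set policy name="Network Firewall" assign=y

:: Deactivate it again
netsh ipsec static set policy name="Network Firewall" assign=n
```

Note that netsh ipsec static is available on Windows Server 2003 and later; Windows 2000 used the separate ipsecpol resource kit tool instead.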

NOTE: The Network Firewall rules are very aggressive and may not be suitable for all situations. By default, the firewall blocks all incoming/outgoing traffic except on explicitly allowed ports/IPs. This may cause issues with FTP access and with connecting to outside networks on nonstandard ports. If we need to connect to certain outgoing ports, we must add a rule within the Network Firewall to allow that connection.

We do not provide any warranties or guarantees for the use of these firewall rules. Use at your own risk. However, we’ve been using them on our network for many years without any issues.

If you have any questions or feedback, feel free to post your comments.

October 14, 2009

Service interruption issues with Montana Windows Server

Filed under: Technical — admin @ 5:51 am


We're currently experiencing service interruption issues with the Montana shared hosting Windows 2008 server. The problem started after Microsoft security updates were applied, which prevented the system from booting into normal mode.

We have a recent backup of your mail, database and site data; meanwhile, we're working with Microsoft's specialist team to try and revive the server ASAP.

Update: 10:30 AM EST (GMT-4): We're working with Microsoft on resolving this problem at the moment. A further update will follow on whether this is successful.

Update: 11:50 AM EST (GMT-4): The problem has been resolved and all services are back in operation. No data was lost. We're discussing the nature of this problem with Microsoft and hope to update you with further details soon.

March 29, 2009

Secure Your Password and avoid being hacked

We've noticed over the past few months that some of our clients had their sites compromised and later used to send out spam or distribute malicious content to site visitors, via password leaks.

There appears to be no pattern connecting the compromises to each other, except that all of them indicate the hacker gained access to the sites through FTP. The hackers used the account holder's username/password to log in to the site and, manually or through an automated script, upload or replace site files.

Our investigation has revealed that these hacks are not limited to a certain OS, control panel or server. They occurred to some of our direct clients and, in some instances, to clients of our resellers.

Further investigation confirmed there was no server-wide compromise: there is no indication of root compromise, file integrity is intact, and no rogue users or scripts were found on the physical servers.

After carefully analyzing the logs for a few weeks, and running traces on the hackers, we're confident that these attacks were only successful through a username/password compromise of the hacked site.

It appears the hackers are using keylogger malware to sniff username/password information on clients' local workstations, then using this information to log in to the victim's site through FTP and upload their malicious content. Once the password of the account is changed on our end, the hack stops.

We highly recommend that all of our clients check whether their workstations are compromised, even if they're running anti-virus software. We also ask that you ensure your password is not shared over the public Internet, such as through messengers or email. Additionally, please verify your password meets the complexity rules stated in section 8 below.

Hackers can plant keyloggers and data sniffers on your local workstation through many methods, including security weaknesses in software you run on your system (such as Internet Explorer, Firefox, Windows Media Player, QuickTime Player, Outlook or Office), password guessing, or password dictionary attacks. To help you protect yourself from such attacks, we've prepared a few recommendations to keep your computer system secure:

1) Never share your password with any parties, and always create different passwords for different sites

2) Be extremely careful when working on a remote system or a system that is shared with others. We don't recommend using a shared system to log in to sensitive websites. There is a chance that a shared system contains a password hijacker program, or is on a rogue network.

3) If you share your password with 3rd parties, please ask them to follow these steps as well.

4) Before changing your passwords, please ensure your system is clean of viruses. There is no point in changing passwords if the system you're working on is already compromised. Here are a few suggestions on how to scan your system for viruses on the Microsoft platform:

a) Download, Install and run Malwarebytes from https://www.malwarebytes.org/

b) Download, Install and run Microsoft Security Essentials from http://windows.microsoft.com/en-us/windows/security-essentials-download

c) If malicious content is found using the above two tools, we recommend doing a more rigorous check for hidden malware using Dr. Web CureIt: http://www.freedrweb.com/cureit/

d) You can also do further checking using Prevx http://www.prevx.com/freescan.asp

e) For Advanced users, you can also try Microsoft Rootkit Revealer to show any hidden content at: http://technet.microsoft.com/en-ca/sysinternals/bb897445.aspx (Please note, Rootkit Revealer may generate false positives)

f) Lock down your Windows using EMET (Enhanced Mitigation Experience Toolkit) from: http://www.microsoft.com/en-us/download/details.aspx?id=41138

If you're not currently using anti-virus and anti-spyware software, we urge you to purchase one soon. In the meantime, you can try these free real-time scanning alternatives: http://free.avg.com/ or http://www.avira.com

We also strongly encourage you to check and install the latest Windows Security Updates from Microsoft: http://windowsupdate.microsoft.com/. Additionally, you can use the following tools to check for any out-of-date applications installed on your system:

– MBSA from Microsoft: http://www.microsoft.com/downloads/details.aspx?FamilyID=F32921AF-9DBE-4DCE-889E-ECF997EB18E9

– Secunia Scanner: http://secunia.com/vulnerability_scanning/

5) If your virus scanner finds any malicious content that is rated Medium-High, please advise us immediately. We’ll change your password from our end.

6) Even if you run an up-to-date virus scanner, we urge you to run multiple scans using the instructions above. Sometimes real-time scanning is unable to catch viruses spread through a web browser, or its signature database may not be up to date.

7) Once you've confirmed your local machine is safe, check the other machines on your local network to ensure no infection spreads from one machine to another via USB keys or network file sharing.

8) Ensure your password is complex enough. The ideal password is at least 8 characters long and contains upper and lower case characters, a number and a special character.

You can use Microsoft online password checker to verify your password strength. A level of Strong or above would be ideal: http://www.microsoft.com/protect/yourself/password/checker.mspx

If you’re using the default password which was sent to you when your hosting account was created, please change it immediately. The Control Panel interface offers a handy password generation utility.
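The complexity rules above can be sketched as a quick shell check. This is a rough heuristic only; it verifies the four rules, not real password strength:

```shell
#!/bin/bash
# Rough check of the complexity rules above: length >= 8, plus
# an upper case letter, a lower case letter, a digit, and a
# special character.
check_password() {
  local p="$1"
  [ "${#p}" -ge 8 ] || { echo WEAK; return; }
  case "$p" in (*[A-Z]*) ;; (*) echo WEAK; return;; esac
  case "$p" in (*[a-z]*) ;; (*) echo WEAK; return;; esac
  case "$p" in (*[0-9]*) ;; (*) echo WEAK; return;; esac
  case "$p" in (*[!a-zA-Z0-9]*) ;; (*) echo WEAK; return;; esac
  echo OK
}

check_password "password"       # prints WEAK (lower case only)
check_password 'Tr0ub4dor&3'    # prints OK (meets all four rules)
```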

9) It is always preferable to use secure connections when transmitting passwords online. This includes not logging in to any systems or sites that do not support encryption. Our servers allow you to connect securely for FTP, cPanel, SMTP and POP3 access, as follows:

– We support Auth TLS FTP connections
– You can login securely to your cPanel interface through https://enterYourSiteName.com/cpanel/ , you may be presented with a security certificate warning, please accept it to continue.
– You can access secure SMTP on the same port as your regular SMTP connection (Port 25 or 26)
– You can access secure POP3 on port 995, which is set by default in Outlook when checking "This server requires a secure connection (SSL)"

Please note, using SSL connections may result in slower speeds and occasional timeouts. Using SSL will also display a warning asking you to accept the server certificate; this is an inherent limitation of shared SSL certificates.

We hope this information is of value and helps you maintain a safe and secure online presence.

January 13, 2009

Building Secure Software – 25 Tips from SANS

Filed under: Technical — admin @ 10:34 pm

The SANS Institute, in collaboration with various software vendors, academics, security analysts and the NSA, has compiled a list of the Top 25 most dangerous software errors.

The list contains information on the mistakes that allow a hacker to compromise a piece of code, ranging from the most obvious and well-known injection attacks to the less glamorous but widely ignored issues of weak data encryption and hard-coded passwords.

Any software developer will find this information very valuable. Insecure software is quickly becoming a huge obstacle to advancing online communications. The list is available at:

http://www.sans.org/top25errors/

Here is a mirror copy: cwe.mitre.org/top25/

This page contains reviews of security auditing software: http://www.sans.org/whatworks/


December 22, 2008

Block Certain Users from Accessing your website

Filed under: General — admin @ 2:39 pm

Many of us have websites dedicated to certain markets or countries. However, the Internet being a global space, we'll often end up delivering traffic to users who may not be within our target audience.

At other times, we get annoyed by certain users on our websites who wreak havoc by spamming the forum/blog we're running, or cause other annoyances to the site's visitors.

There is a handy tool we can use to filter out the visitors we don't want while maintaining site accessibility for everyone else: the ".htaccess" file. Placed inside your web root folder (typically public_html), the .htaccess file can simplify the way you manage your website.

For instance, to block a certain IP address from accessing our website, we would create a file called: .htaccess and save the following content inside it:

<Limit GET HEAD POST>
order allow,deny
deny from 41.204.224.50
allow from all
</Limit>

The above code blocks the user at IP address 41.204.224.50 from accessing your web page. Remember, though, that most users on the internet are assigned temporary IP addresses that change from time to time, so blocking a single IP address will likely not block that user permanently.
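If you maintain more than a handful of addresses, the deny block can be generated from a plain list of IPs. A minimal sketch; the file names here are placeholders:

```shell
#!/bin/bash
# Generate a .htaccess deny block from a file of IP addresses,
# one per line. IP_LIST and the output path are placeholders.
IP_LIST="${IP_LIST:-/tmp/blocked-ips.txt}"

# Demo only: a sample list of addresses to block.
printf '41.204.224.50\n198.51.100.7\n' > "$IP_LIST"

{
  echo "<Limit GET HEAD POST>"
  echo "order allow,deny"
  while read -r ip; do
    echo "deny from $ip"
  done < "$IP_LIST"
  echo "allow from all"
  echo "</Limit>"
} > /tmp/htaccess-demo

cat /tmp/htaccess-demo
```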

Nevertheless, this idea can be expanded to block certain regions or countries of the world from accessing your website. Let's say you're having trouble with users visiting your website from Nigeria and wish to block access from anywhere in Nigeria. First, you'll need to obtain the list of IP address blocks used in Nigeria. A very handy site for that is http://blockacountry.com, which offers as comprehensive a list as possible associating IP addresses with countries. NOTE: This list may be incomplete or contain errors. Be very careful when blocking a whole country, as the block may catch innocent users residing outside that country who lease IP addresses in the same network block.

Follow the steps on the blockacountry.com page to generate your .htaccess code, copy/paste that code into your .htaccess file, and upload it to your Unix-based server. This will deny access to any site visitor coming from the blacklisted IP addresses.

For our shared Unix users on cPanel servers, there is a simple way to add an IP address to your block list: it is typically available under your cPanel control panel interface -> Security section -> IP Deny Manager icon.


Powered by WordPress