Building a Template Image using Packer and Ansible

In this guide, we will walk through the process of building template images using Packer and Ansible. We will be using a pre-configured repository that contains all the necessary files and configurations to streamline the build process. By the end of this guide, you will have a solid understanding of how to build images for different providers using Packer and Ansible.

Prerequisites

Before we begin, ensure that VirtualBox, Packer, and Ansible are installed. If you plan to build Bhyve (zone) images, you will also need the ZFS utilities:

sudo apt install zfsutils-linux

Note: this setup could also work on other operating systems that support ZFS, VirtualBox, Packer, and Ansible.

Step 1: Clone the Repository

  1. Open a terminal or command prompt.
  2. Navigate to the directory where you want to clone the repository.
  3. Run the following command to clone the repository with submodules:
    git clone --recursive https://github.com/STARTcloud/vagrant_box_template_creator
  4. Change into the cloned repository directory:
    cd vagrant_box_template_creator/builder

Step 2: Configuration

  1. Navigate to the definitions folder:
    cd definitions
  2. Create a cloud-credentials.json file based on the cloud-credentials-example.json file:
    cp cloud-credentials-example.json cloud-credentials.json
  3. Open the cloud-credentials.json file in a text editor and fill in the necessary secrets for pushing images to cloud repositories. Replace the placeholder values with your actual credentials.
  4. Review the vendor.json file and update it with your organization-specific details, such as product URL, vendor URL, vendor name, and vendor domain.
  5. Choose the desired operating system template from the definitions/templates folder. In this example, we’ll use debian12-server.json for Debian 12.
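Before building, you can optionally have Packer validate the template and variable files as a quick sanity check (this assumes the same file layout used by the build command in Step 3):

packer validate -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' tasks/build-ansible-local.json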

Step 3: Building the Base Image

  1. Open a terminal and navigate to the root directory of the cloned repo:
    cd vagrant_box_template_creator/builder
  2. Run the following command to build the base image using Packer and Ansible-Local:

    packer build -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' tasks/build-ansible-local.json

    This command uses the build-ansible-local.json file as the main Packer build file and incorporates variables from debian12-server.json, vendor.json, and cloud-credentials.json.
  3. Packer will start the build process and use VirtualBox to create the base image. The build process will take some time, and you can monitor the progress in the terminal.
  4. Once the build is complete, the base image will be stored as an OVA file in the temp directory.
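You can confirm the artifact was produced by listing the temp directory:

ls -lh temp/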

Step 4: Creating Provider-Specific Images and Uploading to Vagrant Cloud

After building the base image, you can convert it to other formats for different providers and upload them to BoxVault (or Vagrant Cloud) using the following commands:

VirtualBox

packer build -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' providers/virtualbox/publish.json

This command will create a VirtualBox-compatible image using the publish.json file in the providers/virtualbox folder. The resulting image will be stored in the providers/virtualbox/boxes folder and uploaded to Vagrant Cloud.

Zone (Bhyve)

packer build -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' providers/zones/publish.json

This command will create a Zone (Bhyve)-compatible image using the publish.json file in the providers/zones folder. The resulting image will be stored in the providers/zones/boxes folder and uploaded to Vagrant Cloud.

AMI (Amazon Machine Image)

packer build -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' providers/ami/publish.json

This command will create an AMI (Amazon Machine Image) using the publish.json file in the providers/ami folder. The resulting image will be stored in the providers/ami/boxes folder and uploaded to Vagrant Cloud.

Docker

packer build -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' providers/docker/publish.json

This command will create a Docker image using the publish.json file in the providers/docker folder. The resulting image will be stored in the providers/docker/boxes folder and uploaded to Vagrant Cloud.

Accessing Images on Vagrant Cloud

Once the images are uploaded, you can find them under your respective organization on Vagrant Cloud. For example, the Debian 12 server image can be accessed at:

https://portal.cloud.hashicorp.com/vagrant/discover/STARTcloud/debian12-server

Each of these commands uses the respective publish.json file located in the providers folder to build, publish, and upload the image for the specific provider. The publish.json files contain the necessary configuration and provisioning steps for each provider, including the upload to Vagrant Cloud.
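Once a box has been published, it can be consumed directly with Vagrant. A minimal example, assuming the box name shown in the URL above:

vagrant init STARTcloud/debian12-server
vagrant up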

Step 5: Customizing the Build Process

The build process can be customized and extended to fit your specific requirements. Here are a few key areas you can explore:

Ansible Playbooks

  • The Ansible playbooks used for provisioning the image are located in the provisioners/ansible/playbooks folder.
  • The main playbook for building the image with Ansible-Local is build-ansible-local-playbook.yml.
  • You can customize the playbook and roles to add additional provisioning steps or modify the existing configuration.
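If you edit the playbook, a quick syntax check before kicking off a long Packer run can catch YAML mistakes early (this assumes Ansible is installed locally and the playbook path matches the repository layout):

ansible-playbook --syntax-check provisioners/ansible/playbooks/build-ansible-local-playbook.yml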

Preseed and Shell Scripts

  • Preseed files for different operating systems and types (server/desktop) are located in the provisioners/preseed folder.
  • Shell scripts for various provisioning tasks are located in the provisioners/shell folder.
  • You can modify or add new scripts to perform additional provisioning tasks specific to your needs.

Temporary Files and Output

  • During the build process, temporary files and output images are stored in the temp folder.
  • The final built images for each provider can be found in their respective boxes folders under the providers folder.

Step 6: Cleaning Up

After the build process is complete and you have obtained the desired images, you can clean up the temporary files and artifacts by running the following command:

packer build -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' tasks/cleanup.json

This command uses the cleanup.json file in the tasks folder to remove the temporary files and artifacts generated during the build process.

Conclusion

Congratulations! You have now learned how to build template images using Packer and Ansible based on the provided setup. You can use this knowledge to create custom images for different providers and automate the provisioning process.

Remember to review and update the configuration files, credentials, and templates according to your specific needs. Feel free to explore the different folders and files to gain a deeper understanding of the build process and make any necessary modifications.

If you encounter any issues or have further questions, refer to the official documentation of Packer and Ansible for more information and troubleshooting steps.

Happy building!

How to Automate building a docker container using Packer and Ansible from a templated image

Introduction

This guide will help you build a Docker container for the Moonshine-dev application, which is written in Haxe with a Gradle REST API. The container will consist of three layers:

  1. Base Layer: A Debian 12-based image prepared using Packer and Ansible, as referenced here: Building a Template Image using Packer and Ansible
  2. Intermediate Layer: Common provisioning steps to speed up future builds.
    moonshine-dev/base-latest
  3. Application Layer: Installation and configuration of the Moonshine-dev application.
    moonshine-dev/latest

See the Docker documentation for more about image layers.

Prerequisites

Ensure you have the following software and requirements met:

  • Super.Human.Installer (SHI) Instance: Use a SHI instance, which is a GUI wrapper around Vagrant. This instance is based on a STARTcloud template and already has Ansible installed.
  • Docker: Install Docker on the SHI instance. You can follow the official Docker installation guide for your operating system, or add the startcloud.startcloud_roles.docker role.

Step 1: Clone the Repository

  1. Open a terminal in your SHI instance.
  2. Navigate to the directory where you want to clone the repository.
  3. Run the following command to clone the repository with submodules:
    git clone --recursive https://github.com/STARTcloud/vagrant_box_template_creator
  4. Change into the cloned repository directory:
    cd vagrant_box_template_creator

Step 2: Build the Base Layer

The base layer is a Debian 12-based image that serves as a foundation for other applications. It is built using Packer and Ansible.

Base Playbook

The base playbook for all STARTcloud images, referenced in the previous article (Building a Template Image using Packer and Ansible), is as follows:

---
- name: "This Playbook Creates the Base Template via Ansible-Local"
  become: true
  gather_facts: true
  hosts: all
  collections:
    - startcloud.startcloud_roles
  roles:
    - role: startcloud.startcloud_roles.dependencies
    - role: startcloud.startcloud_roles.serial
    - role: startcloud.startcloud_roles.cockpit
    - role: startcloud.startcloud_roles.nfs
      vars:
        nfs_exports: []
        nfs_rpcbind_enabled: true
        nfs_rpcbind_state: started
    - role: startcloud.startcloud_roles.ntp
      vars:
        ntp_area: ""
        ntp_cron_handler_enabled: false
        ntp_enabled: true
        ntp_manage_config: false
        ntp_restrict:
          - "127.0.0.1"
          - "::1"
        ntp_servers:
          - "ntp1.prominic.net iburst"
          - "ntp2.prominic.net iburst"
        ntp_timezone: America/Chicago
        ntp_tinker_panic: false
    - role: startcloud.startcloud_roles.motd
      vars:
        add_footer: false
        add_update: true
        remove_default_config: true
        restore_default_config: false
        sysadmins_email: [email protected]
        sysadmins_signature: "STARTCloud Contact Email"
    - role: startcloud.startcloud_roles.cleanup

Build the Base Image

Run the following command to build the base image:
packer build -var-file='definitions/cloud-credentials.json' -var-file='definitions/vendor.json' -var-file='definitions/templates/x64/debian12-server.json' tasks/build-ansible-local.json

This command uses the build-ansible-local.json file to create a VirtualBox VDI file, which is then converted to a Docker image and pushed to both Vagrant Cloud and Docker Hub.

Step 3: Build the Intermediate Layer

The intermediate layer includes common provisioning steps to speed up future builds. It is built using the same Packer script (deploy.json) as the application layer.

Intermediate Layer Playbook

The playbook for the intermediate layer is as follows:

---
- name: "Setup Intermediate Layer"
  become: true
  gather_facts: true
  hosts: all
  roles:
    - role: startcloud.startcloud_roles.setup
    - role: startcloud.startcloud_roles.hostname
    - role: startcloud.startcloud_roles.dependencies
    - role: startcloud.startcloud_roles.service_user
    - role: startcloud.startcloud_roles.sdkman_install
    - role: startcloud.startcloud_roles.sdkman_java
    - role: startcloud.startcloud_roles.sdkman_gradle
    - role: startcloud.startcloud_roles.ssl
    - role: startcloud.startcloud_roles.supervisord

Build the Intermediate Layer

Run the following command to build the intermediate layer:

VERSION=0.0.1 sudo -E packer build -on-error=abort \
  -var-file='definitions/cloud-credentials.json' \
  -var-file='definitions/vendor.json' \
  -var-file='definitions/templates/x64/debian12-server.json' \
  -var "docker_hub_template_repo_name=debian12-server" \
  -var "docker_hub_template_repo_tag=0.0.8" \
  -var "playbook_file=provisioners/ansible/ansible_collections/moonshine/moonshine_roles/playbooks/main-base-playbook.yml" \
  -var "repo_version=base-latest" \
  -var 'repo_name=moonshine-dev' \
  providers/docker/deploy.json
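Once the build finishes, you can confirm the intermediate image exists locally before moving on to the application layer (standard Docker CLI; the repository name matches the repo_name variable above):

sudo docker images moonshine-dev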

Step 4: Build the Application Layer

The application layer installs and configures the Moonshine-dev application. It uses the intermediate layer as its base.

Application Layer Playbook

The playbook for the application layer is as follows:

---
- name: "Generating Playbook"
  become: true
  gather_facts: true
  hosts: all
  vars:
    core_provisioner_version: 0.0.1
    provisioner_name: PackerImageBuilder
    provisioner_version: 0.0.1
    settings:
      hostname: app
      domain: moonshine.dev
      server_id: 500001
      vagrant_user_pass: 'XaVuzq2vRV4fTk'
    debug_all: true
    selfsigned_enabled: true
    haproxy_ssl_redirect: true
    letsencrypt_enabled: false
    service_user: java_user
    service_group: java_group
    service_home_dir: /local/notesjava
    cert_dir: /secure
    installer_dir: /vagrant/installers
    completed_dir: /vagrant/completed
    domino_organization: STARTcloud
    domino_install_dir: /opt/hcl/domino/notes/latest/linux

  collections:
    - startcloud.startcloud_roles
    - startcloud.hcl_roles
    - moonshine.moonshine_roles

  roles:
    - role: startcloud.hcl_roles.domino_vagrant_rest_api
    - role: startcloud.startcloud_roles.haxe
    - role: moonshine.moonshine_roles.moonshinedev_deploy
    - role: startcloud.startcloud_roles.haproxy
      vars:
        haproxy_cfg: /vagrant/ansible/ansible_collections/moonshine/moonshine_roles/roles/moonshinedev_deploy/templates/moonshinedev-haproxy.cfg.j2

Build the Application Layer

Run the following command to build the application layer. Note that docker_hub_template_repo_name and docker_hub_template_repo_tag point at the intermediate image/layer built above:

VERSION=0.0.1 sudo -E packer build -on-error=abort \
  -var-file='definitions/cloud-credentials.json' \
  -var-file='definitions/vendor.json' \
  -var-file='definitions/templates/x64/debian12-server.json' \
  -var "docker_hub_template_repo_name=moonshine-dev" \
  -var "docker_hub_template_repo_tag=base-latest" \
  -var "playbook_file=provisioners/ansible/ansible_collections/moonshine/moonshine_roles/playbooks/main-application-playbook.yml" \
  -var "repo_version=latest" \
  -var 'repo_name=moonshine-dev' \
  providers/docker/deploy.json

Step 5: Testing the Image

After building the Docker image, you can test it by running the following command:

docker run -p 80:80/tcp -p 443:443/tcp -p 8080:8080/tcp -i -t startcloud/moonshinedev:latest /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf

or

docker run -p 80:80/tcp -p 443:443/tcp -p 8080:8080/tcp -i -t startcloud/moonshinedev:latest

This command will start the Moonshine-dev application in a Docker container, allowing you to test its functionality.
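To verify that the container is actually serving requests, you can probe the published ports from the host (a hypothetical smoke test; -k skips verification of the self-signed certificate, and exact responses depend on your application):

curl -ik http://localhost:80
curl -ik https://localhost:443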

If you need to debug the Docker container, you can access it with the following commands:
1. List the Docker containers and grab the name or ID of yours:

sudo docker container ls

2. Then you can use exec to access the container:

sudo docker container exec -it <CONTAINER_ID_OR_NAME> /bin/bash
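You can also follow the container's logs while debugging (standard Docker CLI):

sudo docker logs -f <CONTAINER_ID_OR_NAME>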

Conclusion

By following this guide, you have successfully built a multi-layer Docker container for the Moonshine-dev application. Each layer serves a specific purpose, from providing a base Debian 12 image to installing and configuring the application itself. This modular approach allows for efficient and flexible image management, making it easier to update and maintain the application over time.

Marks Home Media Server

Hey guys, wanted to make this easy for ya. Here is how you can download movies and TV shows to Plex.

Media Center

If you want to skip downloading something new and watch something already added to the library, go here. You should have login details to access Plex.

Movies

You can download movies by accessing Radarr.

To add a movie, click Add a Movie.

Enter the name of the movie that you want to download, then click Search and Add.

You will see a confirmation that the movie has been added.

Then go to Activity to verify whether it will be downloaded; depending on the queue, it may take some time to become available in Plex.

TV Shows

You can download TV Shows by accessing Sonarr.

To add a TV show, click Add Series.

Enter the name of the TV show that you want to download, then click Search and Add.

You will see a confirmation that the TV show has been added. Then go to Activity to verify whether it will be downloaded; depending on the queue, it may take some time to become available in Plex.

Download Queue

To check how long it will take for a TV show or movie to become available in Plex, you will need to access SABnzbd. You may need login credentials if you are accessing this from outside the home network.

Migrate SAN Array on OmniOS to a new Host

Migrate SAN Array vDev/luns to another SAN


Here we show the commands to assist in migrating a SAN array from one OmniOS cluster to another. This will also show how to initialize the new SAN Array Host.

Target SAN Array Host: 1121 – OmniOS
Old Target SAN Array Host: 1154 – OmniOS

Initiators:
1063 – migrated to 1121
1064 – check if migrated
1065 – migrated to 1121
1066 – migrated to 1121

Definitions:

SAN – A storage area network (SAN) is a network which provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system as locally attached devices.

sbdadm – SCSI Block Disk command line interface

stmfadm – SCSI target mode framework command line interface

svcadm – manipulate service instances

Source SAN host:
# Review existing logical units, pools, datasets, and snapshots
pfexec sbdadm list-lu
zpool status
zfs list -rt snapshot
zpool list
zfs list
# Snapshot the host's dataset (supply a snapshot name after the @)
pfexec zfs snapshot Array-0/hosts/host-xxxx@<snapname>
# Stream the snapshot to a file for transfer to the new host
pfexec zfs send Array-0/hosts/host-xxxx@<snapname> > host-xxxx
ls -lh host-xxxx
# Copy the stream file to the target SAN host
pfexec scp /home/m4kr/host-xxxx m4kr@<hostIP>:/home/
# Review the host group, then delete the old logical unit
pfexec stmfadm list-hg -v host-xxxx
pfexec sbdadm list-lu
pfexec sbdadm delete-lu GUIDxxxxxxxxxxxxxxxxxxxxxxx
ls -la
pfexec sbdadm list-lu

Target SAN host:

Fresh SAN host:
# Create a Fibre Channel target group and add the HBA port WWNs as members
pfexec stmfadm create-tg FC-0
pfexec stmfadm add-tg-member -g FC-0 wwn.xxxxxxxxxxxxx wwn.xxxxxxxxxxxxx wwn.xxxxxxxxxxxxx wwn.xxxxxxxxxxxxx
pfexec stmfadm list-tg
pfexec stmfadm list-tg -v
# Enable the SCSI target mode framework service
pfexec svcadm enable stmf

For each Target SAN host VM:
zfs list
# Receive the snapshot stream into the new pool
pfexec zfs recv -d Array-0 < host-xxxx
# Create a logical unit from the received dataset
pfexec sbdadm create-lu /Array\-0/hosts/host\-xxxx/
# Create a host group, map the LUN to the target group, and add the host's WWNs
pfexec stmfadm create-hg host-xxxx
pfexec stmfadm add-view -h host-xxxx -t FC-0 -n 0 GUIDxxxxxxxxxxxxxxxxxxxxxxx
pfexec stmfadm add-hg-member -g host-xxxx wwn.xxxxxxxxxxxxx wwn.xxxxxxxxxxxxx wwn.xxxxxxxxxxxxx wwn.xxxxxxxxxxxxx
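To confirm the LUN is exported as expected, you can list the views for the logical unit (substitute the real GUID reported by sbdadm list-lu):

pfexec stmfadm list-view -l GUIDxxxxxxxxxxxxxxxxxxxxxxx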

This goes in conjunction with: Set up a Home ESXi mini Lab


How to password protect a directory in cPanel

At times, you may find it best practice to password protect a folder on your account. This can add an extra layer of protection to files you don’t want the general public to have access to. Password protecting a directory can be easily accomplished using the option within cPanel. We will also provide you the instructions on how to remove the password protection after it has been added.

Understanding how password protecting a directory works

It's important to understand how password protection on a folder works. When you choose to password protect a directory in cPanel, cPanel creates a rule in your .htaccess file. This rule specifies that the folder is protected and that visitors must provide the proper username and password to log in and view the files.

Please keep in mind that when you grant access through password protection, you are granting access not only to that folder, but to any subfolders located within it. Likewise, once a directory is password protected, you must provide the login credentials to access any of its subfolders.
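For reference, the rule cPanel writes is standard Apache basic-auth configuration. A typical example looks something like the following (the AuthUserFile path and account name here are hypothetical; cPanel manages the real path for you):

AuthType Basic
AuthName "Protected Directory"
AuthUserFile "/home/username/.htpasswds/public_html/protected/passwd"
Require valid-user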

Steps to Password Protect a Directory

    1. Log into cPanel
    2. Go to the Files section and click on the Directory Privacy icon


    3. Select the directory you want to password protect. The Set Permissions screen will appear, where you can provide a name for the folder you're trying to protect.

    4. Next, click on the checkbox labeled Password protect the directory. Make sure you have entered a name for the folder you are going to protect.


    5. Click on Save in order to save the name you have entered for the directory and the option to password protect it.


    6. Create a user to access the protected directory.
    7. Click Save in order to save the user that you have created.


Removing the password protection from a directory

Removing password protection from a directory is a fairly quick and simple process. One reason you might want to password protect a directory and then remove the protection is testing; or, if you are finally ready to make the folder open to the public, you can remove the password protection so that everyone can access the files. The instructions for removing the protection are as follows:

  1. Log into your cPanel
  2. Scroll down to the Security section in the cPanel and then click the Password Protect Directories icon. Choose Web Root if you see a pop-up window, and then click Go
  3. Scroll down the folder list until you see the folder you previously password protected. If the folder is a sub-folder to another one, make sure that you click on the folder icon next to the folder name. If you click on the folder name, the interface will think you’re setting protection on that folder. If you do this by accident, simply re-open the password protection interface to get back to the folder list.
  4. When you find the folder that has been password protected, click on the folder name to select it.
  5. Uncheck the box that says "Password protect this directory".
  6. Click on SAVE in order to save your entries.

How to set up a Node Project as a Service in Centos

You will need to create a service file, for example /etc/systemd/system/Monty.service.

In this file, include the following:

[Unit]
Description=Mikes Monty Example
# Uncomment to require the mysql service to start first:
#Requires=mysql.service
#After=mysql.service

[Service]
ExecStart=/bin/node /home/m4kr/public_html/Monty/node_modules/react-app-rewired/scripts/start.js
WorkingDirectory=/home/m4kr/public_html/Monty
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=monty
#User=root
#Group=root
Environment=NODE_ENV=development PORT=3000

[Install]
WantedBy=multi-user.target
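After saving the unit file, reload systemd, enable the service so it starts on boot, then start it and check its status:

sudo systemctl daemon-reload
sudo systemctl enable Monty.service
sudo systemctl start Monty.service
sudo systemctl status Monty.service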

 


Change root@hostname to different email address

By default, any email sent by the system goes to root@hostname, so critical server errors, log errors, cron job alerts, etc. are all sent to this default email address. To change it to a different, more appropriate email ID, we can do it in two ways.

 

By updating the email aliases file:

For this example, let's set the email to system@mydomain.com.

Step 1: Edit the /etc/aliases file:

 

$ vi /etc/aliases

 

Add the email ID at the bottom of the file:

 

root: system@mydomain.com

To add multiple email IDs, we can simply separate them with commas:

root: system@mydomain.com, linux@mydomain.com

Here, linux@mydomain.com is the second email ID.

 

Step 2: Run the newaliases command to compile the aliases file.

$ newaliases

Step 3: Restart the postfix server.

service postfix restart

 

Second way:

We can simply create a .forward file in root's home directory and add the email address there.

$ vi /root/.forward
system@mydomain.com

Restart the postfix server:
$ service postfix restart
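To confirm the change works, send a test message to root and check that it arrives at the new address (this assumes the mailx package provides the mail command):

echo "alias test" | mail -s "root alias test" root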

 

That’s it. Enjoy!

What type of Web Hosting should I go for?

Choosing the Right Type of Web Hosting

Web hosting usually requires:

  1. Web hosting: file storage, database, and bandwidth on a server maintained by the hosting service
  2. A domain name: the address where visitors find your site, i.e. Google.com, Facebook.com, M4kr.net
  3. Content for your site: the fun/profitable stuff like your shop, blog, and portfolio

We’re going to look at hosting in more detail, so you can compare the types of web hosting available and choose the best fit for your goals, budget, and technical skill level.

You can choose from 3 main types of web hosting

  • Shared hosting
  • Virtual Private Servers (VPS)
  • Dedicated servers

Shared Hosting

Shared hosting places multiple clients on the same server environment, sharing server resources among all users; generally there are limits to prevent one user from hogging the entire server. Web hosts generally provide Linux web hosting as their primary form of hosting.

Types of Shared Hosting

Windows Web Hosting

Generally, Windows-based Web Hosting will support:

  • ASP
  • PHP
  • HTML
  • MS SQL
  • DNS Management

Linux Web Hosting

Linux-based Shared Web Hosting will support:

  • PHP
  • MySQL
  • HTML
  • DNS Management

Pros of Shared Hosting

Generally the cheapest option, and the most effective for small sites that don't require much more than simple web hosting and email. The web host will generally take care of the server, ensuring the sites are up and the server is secure.

Cons of Shared Hosting

Because shared hosting means your site shares a server with many other sites, those sites' traffic volume and security practices can affect you. Most hosts load balance their shared hosting servers to prevent overloading. Shared hosting generally does not include the ability to make server-wide changes, because any server-wide change would affect other users, some of whom may have code that doesn't play well with the change. If you need to customize the server environment further for your code to work properly, you may want to look into a VPS or dedicated server.

Best for…

Simple websites with standard HTML, ASP, or PHP code; shared hosting generally supports databases and email.

VPS Hosting

VPS stands for Virtual Private Server: it is just like running an operating system in VirtualBox on your home computer. A VPS gives you access to the entire virtualized server, making it private and dedicated to you.

Types of VPS Hosting

Windows VPS Hosting

Windows VPS hosting comes stock (meaning no software other than the Operating System has been installed), so as to ensure maximum compatibility with any software the client would want to run on the server.

You generally can choose from the following Windows Operating Systems:

  • Windows Server 2008, 2012, 2016
  • Windows Desktop 7, 8, 10

Some providers may provide Desktop versions of the Windows Operating System as well.

Generally you can install MS SQL and IIS to get a basic Windows-based web hosting server running; there are web hosting control panels for Windows servers, like Plesk.

Linux VPS Hosting

Linux VPS hosting will generally come stock (meaning no software other than the Operating System has been installed), so as to ensure maximum compatibility with any software the client would want to run on the server.

You generally can choose from the following Linux Operating Systems:

  • Debian
  • CentOS
  • Fedora
  • Ubuntu
  • Mint
  • Scientific Linux
  • Many others

A Linux VPS will not come with a desktop; however, you can configure one to run a desktop interface, generally with VNC and a GUI like GNOME or KDE.

You can install MySQL, Apache, and PHP to get a basic Linux-based web hosting server running; there are web hosting control panels for Linux servers like cPanel (the most popular choice), Plesk, Ajenti, and Virtualmin.
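For example, on a Debian or Ubuntu VPS a basic LAMP stack can be installed with a single command (package names are for Debian/Ubuntu; other distributions differ):

sudo apt install apache2 mysql-server php libapache2-mod-php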

Pros of VPS Hosting

With a VPS, you don’t have any other users sharing the VPS, this means you can make any changes you want to Apache, IIS, etc so that you can enable custom modules and extensions for your code to work. You can run other applications and code that can’t be run on a Shared Web Hosting service, your provider will not make any changes to your VPS unless you ask them to, or if your VPS causes issues.

Cons of VPS Hosting

Since the server is virtualized, most providers place many VPSs on the same node server. These nodes are very powerful servers designed for exactly this purpose; the downside is that in some VPS backend configurations, if another client is attacked or is acting up in some way, such as overusing CPU or networking, it can affect you. Most hosts monitor for such issues and work with the clients causing them, to keep other users from being affected.

Best for…

Site owners who want custom control over their server environment or want to run applications that won't run on a shared web hosting service.

 

Dedicated Server Hosting

 

A dedicated server is like having your home PC (maybe even more powerful than that), but in a datacenter.

Pros of Dedicated Hosting

Since the server is hardware, not a virtualized instance, you have complete control over it. You do not share anything with other clients, in most cases not even networking. You can install CPU-intensive applications, and if your site needs that much power and customization behind it, a dedicated server lets you use all of the server's cores, giving you the full limits of the processor.

Cons of Dedicated Hosting

Dedicated servers, just like VPSs, are difficult to maintain if they are unmanaged. Hardware and software updates are generally automatic on most, but VPSs and dedicated servers do require more configuration to get set up the way that you want.

Best for…

Established businesses that want to host many sites, implement their own security protocols, handle high traffic volumes, or store huge amounts of data.

How to configure timeout settings in FileZilla

Before starting this discussion of how to configure timeout settings in FileZilla, you should know why this error occurs in the first place. I have discussed this in detail below, so read on.

Suppose you are uploading a relatively large file (or any type of file) via your FTP client, e.g. FileZilla (used here for this explanation), and every time an error appears saying the connection timed out, and the upload process for that specific file starts again.

Have you faced this problem? Even if you haven't, this will be helpful information for you. Now, what is the actual reason for this problem?

The problem is related to the internal timeout settings of FileZilla: FileZilla ships with a predefined timeout setting by default.

So if your file is relatively large, or your internet connection speed is not up to the mark, the upload fails to complete within that predefined timeout, a Connection Timeout error occurs, and the upload starts again.

Is there any solution to this issue? Obviously there is: you just have to configure the timeout settings in FileZilla.

Here is how you can change this setting according to your needs.

Step 1: Open the FileZilla client application on your desktop. A new FileZilla window will open.

Step 2: Click Edit on the menu bar and select the Settings option.

Step 3: A new, small Settings window will open. On the left side of the Settings window there is a section called Select page; select the top option called Connection (it is selected by default when you open the Settings window; if not, select it manually).

When you click on the Connection entry, you will see a Timeout section on the right side, where you can set a timeout value from 0-599 seconds.

Set this according to your needs, or disable the timeout entirely by setting its value to 0.

Step 4: Now click OK.

You are done. Now start re-uploading your files.