XenServer development has moved outside of Citrix and has been handed over to the community - http://xenserver.org/. The home page of the free Xen hypervisor is http://xenproject.org/. All previously paid features are now available free of charge; the paid edition retains only product support and updates through the GUI. The solution's further development strategy lies in the open source field.
Below I quote excerpts from the release notes and technical FAQ documents on the Citrix website, for easier reference.
64-bit Xen Hypervisor
Active Directory Integration
Role-Based Administration and Audit Trail
VMware to XenServer Conversion Utilities
Multi-Server Management with XenCenter
Live VM Migration with XenMotion™
Live Storage Migration with Storage XenMotion
Dynamic Memory Control
Host Failure Protection with High Availability
Performance Reporting and Alerting
Mixed Resource Pools with CPU Masking
GPU Pass-Through for Desktop Graphics Processing
IntelliCache™ for XenDesktop Storage Optimization
Live Memory Virtual Machine Snapshot and Revert
OpenFlow Distributed Virtual Switch
Integrated Multi-site Recovery
Sources
- http://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/xenserver-technical-faq.pdf
- http://www.citrix.com/content/dam/citrix/en_us/documents/products-solutions/citrix-xenserver-core-feature-matrix.pdf
Functional features of XenServer 6.2
What is the difference between XenServer and the open-source Xen
Project™ Hypervisor?
The Xen Project™ hypervisor is used by XenServer. Xen technology is widely acknowledged
as the fastest and most secure virtualization platform in the industry.
In addition to the open-source Xen Project™ hypervisor, Citrix XenServer includes:
Control domain (dom0)
XenCenter – A Windows client for VM management
VM Templates for installing popular operating systems as VMs
Enterprise level support
What is the Control Domain (dom0)?
The Control Domain, also called Domain 0 or dom0, is a secure, privileged VM that runs the
XenServer management toolstack known as xapi. Besides providing XenServer
management functions, dom0 also runs the physical device drivers for networking, storage
etc.
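The control domain shows up in the xe CLI like any other VM object; a quick way to inspect it is to filter on the is-control-domain flag (the parameter list below is just one possible selection):

    # List dom0 and its current memory allocation on each host
    xe vm-list is-control-domain=true params=uuid,name-label,memory-actual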
What is the difference between the free edition and licensed edition of
XenServer?
Citrix XenServer is available to all users. At a feature and functionality level, the only
difference is that free Citrix XenServer users will not be able to use XenCenter for automated
installation of security fixes, packaged updates, bug fixes and maintenance releases. Free
Citrix XenServer does include XenCenter for server management, but not patch
management.
Do I need a system with a 64-bit x86 processor to run XenServer?
Yes, either an Intel VT or AMD-V 64-bit x86-based system with one or more CPUs is
required to run all supported guest operating systems.
Do I need a system with hardware virtualization support for running
Microsoft Windows guest operating systems?
Yes. To run Windows operating systems, you need a 64-bit x86 processor-based system
that supports either Intel VT or AMD-V hardware virtualization technology in the processor
and BIOS.
Can I run XenServer on a system without hardware virtualization
support?
Yes, but you will be limited to Linux-based paravirtualized guests.
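One way to check whether an installed host can run HVM (for example Windows) guests is to look at the host's capabilities field through the xe CLI; treat the exact capability strings as an assumption to verify on your own hardware:

    # hvm-* entries in the capabilities list indicate Intel VT / AMD-V is present and enabled
    xe host-list params=name-label,capabilities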
What does the AMD-V and Intel VT technology do?
The hardware virtualization technology in AMD-V and Intel VT-enabled processors allows the
Xen hypervisor to efficiently handle certain virtualization-unsafe x86 instructions that a VM
might call during its normal course of operation. In first-generation virtualization systems,
complex software layers had to emulate the VM's machine code and rewrite unsafe x86
instructions in real time. The Intel VT and AMD-V hardware virtualization technologies allow
the hypervisor to automatically trap these instructions, which vastly improves virtualization
performance for Windows guests.
What is the maximum size of memory that XenServer can use on a host
system?
XenServer host systems can use up to 1TB of physical memory.
How many processors can XenServer use?
XenServer supports up to 160 logical processors per system.
Note: The maximum number of logical processors supported differs by CPU. Consult the
XenServer Hardware Compatibility List (HCL) for more details.
How many Virtual Machines can run on XenServer concurrently?
The maximum number of Windows-based Virtual Machines (VMs) which are supported to
run on a XenServer host is 450. The maximum number of paravirtualized Linux-based
Virtual Machines (VMs) which are supported to run on a XenServer host is 650.
For any particular system the number of VMs that can run concurrently and with acceptable
performance, will depend on the available resources and the VM workload. XenServer
automatically scales the amount of memory allocated to the Control Domain (dom0) based
on the physical memory available.
Note: In certain circumstances it may be advisable to override this setting if there are more
than 50 VMs per host and the host physical memory is less than 48GB. See section 7.1.1 of
the XenServer 6.2.0 Administrator’s Guide.
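As a rough sketch of that override (the exact procedure and recommended sizes are in section 7.1.1 of the Administrator's Guide, so the file name and value below are assumptions to verify), dom0 memory on XenServer 6.x hosts is controlled by the dom0_mem Xen boot parameter:

    # Assumption: edit the Xen boot entry in /boot/extlinux.conf and add a dom0_mem option
    # to the line that loads xen.gz, for example:
    #   dom0_mem=4096M,max:4096M
    # then reboot the host for the new dom0 memory size to take effect.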
How many physical network interfaces does XenServer support?
XenServer supports up to 16 NICs. These may be bonded to create up to 8 logical network
bonds, with a limit of 4 NICs per bond.
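A minimal sketch of creating a bond from two physical NICs with the xe CLI (the UUIDs are placeholders you would look up first):

    # Find the PIF UUIDs of the physical NICs to bond
    xe pif-list params=uuid,device,host-name-label
    # Create a network for the bond, then bond two PIFs onto it
    xe network-create name-label="bond0"
    xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid>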
How many virtual processors (vCPUs) can XenServer allocate to a VM?
XenServer supports up to 16 vCPUs per guest. The number of vCPUs which can be
supported varies by the guest operating system. Refer to the XenServer 6.2.0 VM User's
Guide for details.
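Changing the vCPU allocation of a halted VM with the xe CLI might look roughly like this; the UUID is a placeholder and VCPUs-max must be at least VCPUs-at-startup:

    # Give the VM 4 vCPUs at boot, with a ceiling of 4
    xe vm-param-set uuid=<vm-uuid> VCPUs-max=4
    xe vm-param-set uuid=<vm-uuid> VCPUs-at-startup=4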
How much memory can XenServer allocate to a VM?
XenServer supports up to 128GB per guest. The amount of memory which can be supported
varies by the guest operating system.
The maximum amount of physical memory addressable by your operating system varies.
Setting the memory to a level greater than the operating system's supported limit may lead to
performance issues within your guest. Some 32-bit Windows operating systems can support
more than 4GB of RAM through use of the physical address extension (PAE) mode. The
limit for 32-bit PV Virtual Machines is 64GB. Please consult your guest operating system's
Administrator's Guide.
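Setting a VM's memory limits with the xe CLI is sketched below (placeholder UUID); all four limits are set in one call and must satisfy static-min <= dynamic-min <= dynamic-max <= static-max:

    # Fix the VM at 4 GiB with no dynamic range
    xe vm-memory-limits-set uuid=<vm-uuid> static-min=4GiB dynamic-min=4GiB dynamic-max=4GiB static-max=4GiB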
How many virtual disk drives can XenServer allocate to a VM?
XenServer can allocate up to 7 virtual disk drives (VDI), including a virtual DVD-ROM device,
per VM. The number of VDIs which can be supported varies by the guest operating system.
This number is not programmatically enforced, but is the supported limit. Customers can
configure up to 16 VDIs using the xe Command Line Interface (CLI), but this is not a
supported configuration. Refer to the XenServer 6.2.0 VM User's Guide for more details.
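As an illustration of adding an extra disk with the xe CLI (SR and VM UUIDs are placeholders), a VDI is created on a storage repository and then attached to the VM through a VBD:

    # Create a 20 GiB virtual disk on a storage repository
    xe vdi-create sr-uuid=<sr-uuid> name-label="data-disk" type=user virtual-size=20GiB
    # Attach it to the VM as device 2 and plug it in while the VM is running
    xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=2 mode=RW type=Disk
    xe vbd-plug uuid=<vbd-uuid>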
How many virtual network interfaces can XenServer allocate to a VM?
XenServer can allocate up to 7 virtual NICs per VM. The number of virtual NICs which can
be supported varies by the guest operating system. Refer to the XenServer 6.2.0 VM User's
Guide for more details.
How does XenServer choose which physical processors it allocates to
the VM?
XenServer does not statically allocate physical processors to any specific VM. Instead,
XenServer dynamically allocates, depending on load, any available logical processors to the
VM. This ensures that processor cycles are used efficiently as the VM can run wherever
there is spare capacity.
How are disk I/O resources split between the VMs?
XenServer uses a fair share resource split for disk I/O resources between VMs. Additionally,
you can give a VM higher or lower priority access to disk I/O resources.
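The per-disk priority mentioned above is exposed as QoS settings on the VBD; the sketch below assumes the ionice-based disk QoS described in the Administrator's Guide (it applies only to certain LVM-based storage types, so treat the parameter names and values as assumptions to verify):

    # Assumption: ionice-style disk QoS; lower class means higher priority
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_type=ionice
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_params:sched=best-effort
    xe vbd-param-set uuid=<vbd-uuid> qos_algorithm_params:class=2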
How are network I/O resources split between the VMs?
XenServer uses a fair share resource split for network I/O resources between the VMs.
Additionally, you can control bandwidth throttling limits per VM using the Open vSwitch.
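A per-VIF rate limit can be set with the xe CLI roughly as follows (the VIF UUID is a placeholder; the limit is expressed in kilobytes per second):

    # Cap this virtual NIC at roughly 4 MB/s of outgoing traffic
    xe vif-param-set uuid=<vif-uuid> qos_algorithm_type=ratelimit
    xe vif-param-set uuid=<vif-uuid> qos_algorithm_params:kbps=4096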
Does XenServer support Solaris x86 as a guest operating system?
No. The experimental support for Solaris x86 has been removed from XenServer v6.2.0.
Does XenServer support FreeBSD, NetBSD, or any other BSD variants as
a guest operating system?
XenServer does not support any BSD-based guest operating systems for general-purpose
virtualization deployments. However, FreeBSD 6.3 VMs running on XenServer have been
certified for use in specific Citrix products.
Do I need to run XenCenter on a Windows computer?
Yes. The XenCenter management console runs on a Windows operating system. Refer to
the XenServer 6.2.0 Installation Guide for more details of the system requirements.
Customers who do not wish to run Windows can control their XenServer hosts and pools
using the xe command-line interface (CLI), installed locally or run directly on the XenServer
host, or via the xsconsole text console.
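For non-Windows management the xe CLI can also be pointed at a remote host; a minimal sketch with a placeholder host name and password:

    # Run an xe command against a remote XenServer host
    xe -s xenserver01.example.com -u root -pw <password> vm-list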
Can VMs created with VMware or Hyper-V run on XenServer?
Yes. VMs can be exported and imported using the industry-standard OVF format.
Additionally, VMs can be converted in batches using XenServer Conversion Manager.
Third-party tools are also available.
Can I convert a VM from the open source version of Xen to XenServer?
No.
What version of NFS is required for remote storage use?
XenServer requires NFS Version 3 over TCP for remote storage use. NFS v4 and NFS over
UDP are not currently supported by XenServer.
Can I use software-based NFS running on a general-purpose server for
remote shared storage?
Yes, although Citrix recommends using a dedicated network attached storage (NAS) device
with NFS Version 3 with high-speed non-volatile caching to achieve acceptable levels of I/O
performance.
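Creating an NFS storage repository with the xe CLI might look roughly like this (server address and export path are placeholders):

    # Create a shared NFSv3 SR visible to the whole pool
    xe sr-create host-uuid=<host-uuid> type=nfs shared=true content-type=user \
      name-label="NFS storage" device-config:server=nfs01.example.com device-config:serverpath=/export/xenserver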
Can I use different types of CPUs in the same XenServer resource pool?
Yes. Although Citrix recommends that the same CPU type is used throughout the pool
(homogeneous resource pool), it is possible for hosts with different CPU types to join a pool
(heterogeneous). Heterogeneous resource pools are made possible by using the
technologies in recent Intel (FlexMigration) and AMD (Extended Migration) CPUs that
provide CPU "masking" or "leveling". These features allow a CPU to be configured to appear
as providing a different make, model, or functionality than it actually does. This enables you
to create pools of hosts with disparate CPUs but still safely support live migrations. Refer to
the XenServer 6.2.0 Administrator’s Guide for more details.
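When a host with a different (but maskable) CPU joins an existing pool, the xe CLI requires an explicit force flag; a sketch with placeholder credentials:

    # Join this host to a heterogeneous pool, accepting CPU feature masking/leveling
    xe pool-join master-address=<pool-master-ip> master-username=root master-password=<password> force=true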
Can I move a running VM from one host to another?
XenMotion allows live migration of running VMs when hosts share the same storage (in a
pool). Additionally, Storage XenMotion allows migration between hosts which don’t share the
same storage.
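A plain XenMotion migration within a pool can be triggered from the xe CLI as sketched below (names are placeholders); Storage XenMotion between hosts that do not share storage uses the same command with additional remote-master, credential and storage-mapping parameters described in the Administrator's Guide:

    # Live-migrate a running VM to another host in the same pool
    xe vm-migrate vm=<vm-name> host=<destination-host> --live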
Does XenServer High Availability work with local storage?
No. To use all HA features, shared storage is required. This enables VMs to be relocated in
the event of a host failure.
However, HA allows VMs that are stored on local storage to be marked for automatic restart
if the host recovers after a reboot.
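With a shared SR in place, HA is enabled per pool and the restart behaviour is set per VM; a sketch with placeholder UUIDs (best-effort corresponds to the local-storage case mentioned above):

    # Enable HA, using a shared SR for the heartbeat and metadata
    xe pool-ha-enable heartbeat-sr-uuids=<shared-sr-uuid>
    # Protected VM: restart on another host after a host failure
    xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart
    # Local-storage VM: restart on the same host if it comes back
    xe vm-param-set uuid=<vm-uuid> ha-restart-priority=best-effort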
What is Dynamic Memory Control (DMC)?
XenServer DMC works by automatically adjusting the memory of running VMs, keeping the amount of memory
allocated to each VM between specified minimum and maximum memory values, guaranteeing performance
and permitting greater density of VMs per server.
Without DMC, when a server is full, starting further VMs will fail with "out of memory" errors: to reduce the
existing VM memory allocation and make room for more VMs you must edit each VM's memory allocation and
then reboot the VM. With DMC enabled, even when the server is full, XenServer will attempt to reclaim memory
by automatically reducing the current memory allocation of running VMs within their defined memory ranges.
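Enabling DMC for a VM amounts to giving it a dynamic memory range below its static maximum; with the xe CLI this might look roughly like the following (placeholder UUID):

    # Let XenServer balloon this VM between 2 GiB and 4 GiB as the host fills up
    xe vm-memory-dynamic-range-set uuid=<vm-uuid> min=2GiB max=4GiB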