IT administrators can use Docker for Windows to easily deploy Docker containers in Windows environments, but they should be aware of the platform's limitations.
Docker for Windows can experience an error with permissions for shared volumes. Docker for Windows applies a fixed default set of read, write and execute permissions to shared volumes for the owning user, the group and other users.
As long as the application can accommodate the shared volume permissions configuration that Docker for Windows uses, the application should have no problem accessing a shared volume. However, if the application requires different permissions than the value that Docker for Windows uses, IT administrators might encounter data directory errors such as:
Data directory is readable by other users. Please change the permissions so that the directory cannot be listed by other users.
In practice, Docker for Windows implements host-mounted storage volumes based on the Microsoft Server Message Block (SMB) protocol. But the SMB protocol doesn't support fine-grained control over file and directory permissions using the traditional Unix-style change mode (chmod) command scheme. The chmod approach uses an octal code as a mask to set bits that enable or disable the rights of the owner, the group and other users.
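The octal scheme can be sketched on a Linux host, where each octal digit is a three-bit mask (read=4, write=2, execute=1) applied to the owner, group and others in turn (the directory name here is only an example; `stat -c` assumes GNU coreutils):

```shell
# Create a directory and restrict it to the owner only.
mkdir -p demo_dir
chmod 0700 demo_dir          # owner: rwx; group: none; others: none
stat -c '%a %A' demo_dir     # prints: 700 drwx------

# Loosen it to the mask Docker for Windows uses for shared volumes.
chmod 0755 demo_dir          # owner: rwx; group: r-x; others: r-x
stat -c '%a %A' demo_dir     # prints: 755 drwxr-xr-x
```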
Docker for Windows doesn't follow the chmod approach, so it must set a single fixed permission. It uses an octal permission code of 0755 that -- in the Unix/Linux environment -- enables the owner to read, write and execute on the shared volume, but prevents groups and others from writing to it.
Consequently, applications that require different permissions might need a different mask, and they might not be able to use the shared volume. The solution is often to employ different storage resources for the application, such as non-host-mounted volumes. Alternatively, developers might be able to recode the application to use the default permissions correctly.
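As an illustrative sketch of the non-host-mounted alternative -- the image and volume names here are hypothetical -- a named Docker volume lives inside Docker's Linux VM rather than on an SMB-backed host share, so the container can apply whatever permissions the application requires:

```shell
# Host-mounted (bind) volume -- subject to the fixed 0755 mask:
# docker run -d -v C:\pgdata:/var/lib/postgresql/data postgres

# Named volume -- Docker manages the storage, so the application
# inside the container can chmod/chown the data directory freely:
docker volume create pgdata
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres
```

This trades direct visibility of the files from the Windows host for permission semantics the containerized application expects.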
Both hypervisor- and container-based virtualization have proven to be sound and reliable enterprise-grade technologies, but they aren't impervious to problems. Careful deployment, proper optimizations and well-considered management policies can all have a positive effect on the performance and reliability of virtual instances and virtualized workloads across the enterprise.