about:placement new

sunyong 2005-02-25 01:12:56
> I don't understand the new(__p) _Tp(__val) in it?

This form of new is called placement new. You can look it up in C++ Primer 3/e, section 8.4.5; Thinking in C++ should describe it as well.
It means: within a block of already-allocated memory (pointed to by __p in the example), take enough space and construct a new object there, using __val as its initial value.
Unlike ordinary new, it does not allocate space from the heap at the moment it executes.

Before using it, you must first #include <new>.

#include <new>
class Foo;
...
char* buf = new char[500]; // allocate a raw block of memory, 500 bytes
Foo* pb1 = new(buf) Foo(abc); // construct a Foo object (consuming sizeof(Foo)) starting at the address buf points to, with abc as its initial value
buf += sizeof(Foo); // advance the pointer
Foo* pb2 = new(buf) Foo(def); // construct another Foo object (consuming sizeof(Foo)) starting at the new address, with def as its initial value

-- 侯捷
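
To round the example above out into something compilable (the Foo class below and its constructor argument are made up for illustration, not taken from stl_alloc.h), note also the cleanup that placement new leaves to the caller: the objects must be destroyed with explicit destructor calls, and the raw buffer is released separately.

#include <new>
#include <string>

struct Foo {
    std::string s;
    Foo(const char* v) : s(v) {}      // takes an "initial value", like abc/def above
};

int main()
{
    char* buf = new char[500];        // one raw block, 500 bytes
    char* cur = buf;

    Foo* pb1 = new(cur) Foo("abc");   // construct the first Foo in place
    cur += sizeof(Foo);               // step past it
    Foo* pb2 = new(cur) Foo("def");   // construct the second Foo in place

    // Placement new allocated nothing, so "delete pb1" would be wrong here.
    // Destroy the objects explicitly, then free the raw block once.
    pb2->~Foo();
    pb1->~Foo();
    delete[] buf;
    return 0;
}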


----- Original Message -----
From: sunyong
To: jjhou
Sent: Thursday, February 24, 2005 3:20 PM
Subject: about:new(__p) _Tp(__val)


Mr. Hou, hello:
In the file stl_alloc.h I came across this line of code:
void construct(pointer __p, const _Tp& __val) { new(__p) _Tp(__val); }
I don't understand the new(__p) _Tp(__val) in it. I also wrote an example of my own, but it would not compile. Could you help explain it?

Sun Yong

Best regards,
6 replies
healer_kx 2005-02-25
See chapters 2 and 3 of The Annotated STL Sources (《STL源码剖析》). The key point is that it explicitly calls your constructor, but without doing a malloc.
寻开心 2005-02-25
Not exactly the same: placement new does not merely place the object at the specified memory location, it also performs the object's initialization at that location.
Wolf0403 2005-02-25
It is usually something like:

void* operator new(size_t _sz, void* p)
{
    return p;   // the placement form allocates nothing; it simply returns the address it was given
}
寻开心 2005-02-25
Look up the entry "How new Works" in the MSDN index.
Note that placement new is only useful for user-defined types; for built-in types such as float, char, and int it serves no purpose.
寻开心 2005-02-25
Here is the original example from MSDN:

#include <malloc.h>
#include <memory.h>

class Blanks
{
public:
    Blanks(){}
    void *operator new( size_t stAllocateBlock, char chInit );
};

// Custom placement-style operator new: allocates with malloc and
// fills the block with the character chInit before construction.
void *Blanks::operator new( size_t stAllocateBlock, char chInit )
{
    void *pvTemp = malloc( stAllocateBlock );
    if( pvTemp != 0 )
        memset( pvTemp, chInit, stAllocateBlock );
    return pvTemp;
}

int main()
{
    // The extra argument 0xa5 is forwarded to the operator new above.
    Blanks *a5 = new( 0xa5 ) Blanks;

    return a5 != 0;
}
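
One caveat about this style of overload (the matching operator delete below is an illustrative addition, not part of the MSDN example): if the Blanks constructor were to throw, the compiler looks for a placement operator delete with the same extra parameter list to release the memory, and without one the malloc'd block would leak.

// Declared inside class Blanks next to the operator new, e.g.:
//     void operator delete( void *pvMem, char chInit );
void Blanks::operator delete( void *pvMem, char /*chInit*/ )
{
    free( pvMem );   // release the block obtained by the matching operator new
}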
redleaves 2005-02-25
This kind of new essentially just calls the constructor for you. It does not allocate any memory: you must first allocate enough space yourself for the object you are going to new, and you must keep holding that memory until the object constructed there has been destroyed.

How to call it:
new(address of the memory block) Type
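
Tying this back to the line the original question asked about: an SGI-style construct/destroy pair is essentially a thin wrapper over placement new and an explicit destructor call. A simplified sketch of that pattern (not the exact stl_alloc.h / stl_construct.h source):

#include <new>

// construct: build a _Tp at the already-allocated address __p, copy-initialized from __val
template <class _Tp>
inline void construct(_Tp* __p, const _Tp& __val) { new(__p) _Tp(__val); }

// destroy: the matching step, run the destructor without freeing the memory
template <class _Tp>
inline void destroy(_Tp* __p) { __p->~_Tp(); }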