Computer Science Theory for the Information Age (English Edition)

Publisher: Shanghai Jiao Tong University Press    Publication date: 2013-06-03
Format: 16开    Pages: 386
List price: ¥35.00

Computer Science Theory for the Information Age (English Edition): Copyright Information

  • ISBN: 9787313096098
  • Barcode: 9787313096098; 978-7-313-09609-8
  • Binding: standard offset paper
  • Volumes: not listed
  • Weight: not listed

Computer Science Theory for the Information Age (English Edition): Highlights

Computer Science Theory for the Information Age (English Edition) is part of Shanghai Jiao Tong University's Zhiyuan textbook series, written by the internationally renowned computer scientists Professor John Hopcroft and Professor Ravindran Kannan. The book covers high-dimensional space, random graphs, singular value decomposition, random walks and Markov chains, learning algorithms and VC-dimension, algorithms for massive data problems, clustering, and graphical models and belief propagation, and closes with an appendix and an index. From Chapter 2 onward, each chapter ends with a set of exercises. The book is suitable as a textbook for upper-level undergraduates or graduate students in computer science and related disciplines, and as a reference for practitioners in related fields.

Computer Science Theory for the Information Age (English Edition): Description

Computer Science Theory for the Information Age (English Edition) is one of Shanghai Jiao Tong University's Zhiyuan textbook series, written by John Hopcroft. Synopsis: Computer Science Theory for the Information Age covers the computer science theory likely to be useful in the next 40 years, including high-dimensional space, random graphs, singular value decomposition, random walks, Markov chains, learning algorithms, VC-dimension, algorithms for massive data problems, and clustering. The book also covers graphical models and belief propagation, ranking and voting, sparse vectors, and compressed sensing. The book is intended for either an undergraduate or a graduate theory course in computer science. Prof. John Hopcroft is a world-renowned scientist and an expert on education in computer science. He was awarded the A. M. Turing Award in 1986 for his contributions to theoretical computing and data structure design. Dr. Ravindran Kannan is a principal researcher with Microsoft Research Labs located in India.

Computer Science Theory for the Information Age (English Edition): Table of Contents

1 Introduction
2 High-Dimensional Space
2.1 Properties of High-Dimensional Space
2.2 The High-Dimensional Sphere
2.2.1 The Sphere and the Cube in Higher Dimensions
2.2.2 Volume and Surface Area of the Unit Sphere
2.2.3 The Volume is Near the Equator
2.2.4 The Volume is in a Narrow Annulus
2.2.5 The Surface Area is Near the Equator
2.3 Volumes of Other Solids
2.4 Generating Points Uniformly at Random on the Surface of a Sphere
2.5 Gaussians in High Dimension
2.6 Bounds on Tail Probability
2.7 Random Projection and the Johnson-Lindenstrauss Theorem
2.8 Bibliographic Notes
2.9 Exercises
3 Random Graphs
3.1 The G(n, p) Model
3.1.1 Degree Distribution
3.1.2 Existence of Triangles in G(n, d/n)
3.2 Phase Transitions
3.3 The Giant Component
3.4 Branching Processes
3.5 Cycles and Full Connectivity
3.5.1 Emergence of Cycles
3.5.2 Full Connectivity
3.5.3 Threshold for O(ln n) Diameter
3.6 Phase Transitions for Monotone Properties
3.7 Phase Transitions for CNF-sat
3.8 Nonuniform and Growth Models of Random Graphs
3.8.1 Nonuniform Models
3.8.2 Giant Component in Random Graphs with Given Degree Distribution
3.9 Growth Models
3.9.1 Growth Model Without Preferential Attachment
3.9.2 A Growth Model with Preferential Attachment
3.10 Small World Graphs
3.11 Bibliographic Notes
3.12 Exercises
4 Singular Value Decomposition (SVD)
4.1 Singular Vectors
4.2 Singular Value Decomposition (SVD)
4.3 Best Rank k Approximations
4.4 Power Method for Computing the Singular Value Decomposition
4.5 Applications of Singular Value Decomposition
4.5.1 Principal Component Analysis
4.5.2 Clustering a Mixture of Spherical Gaussians
4.5.3 An Application of SVD to a Discrete Optimization Problem
4.5.4 Spectral Decomposition
4.5.5 Singular Vectors and Ranking Documents
4.6 Bibliographic Notes
4.7 Exercises
5 Random Walks and Markov Chains
5.1 Stationary Distribution
5.2 Electrical Networks and Random Walks
5.3 Random Walks on Undirected Graphs with Unit Edge Weights
5.4 Random Walks in Euclidean Space
5.5 The Web as a Markov Chain
5.6 Markov Chain Monte Carlo
5.6.1 Metropolis-Hasting Algorithm
5.6.2 Gibbs Sampling
5.7 Convergence of Random Walks on Undirected Graphs
5.7.1 Using Normalized Conductance to Prove Convergence
5.8 Bibliographic Notes
5.9 Exercises
6 Learning and VC-Dimension
6.1 Learning
6.2 Linear Separators, the Perceptron Algorithm, and Margins
6.3 Nonlinear Separators, Support Vector Machines, and Kernels
6.4 Strong and Weak Learning - Boosting
6.5 Number of Examples Needed for Prediction: VC-Dimension
6.6 Vapnik-Chervonenkis or VC-Dimension
6.6.1 Examples of Set Systems and Their VC-Dimension
6.6.2 The Shatter Function
6.6.3 Shatter Function for Set Systems of Bounded VC-Dimension
6.6.4 Intersection Systems
6.7 The VC Theorem
6.8 Bibliographic Notes
6.9 Exercises
7 Algorithms for Massive Data Problems
7.1 Frequency Moments of Data Streams
7.1.1 Number of Distinct Elements in a Data Stream
7.1.2 Counting the Number of Occurrences of a Given Element
7.1.3 Counting Frequent Elements
7.1.4 The Second Moment
7.2 Sketch of a Large Matrix
7.2.1 Matrix Multiplication Using Sampling
7.2.2 Approximating a Matrix with a Sample of Rows and Columns
7.3 Sketches of Documents
7.4 Exercises
8 Clustering
8.1 Some Clustering Examples
8.2 A Simple Greedy Algorithm for k-clustering
8.3 Lloyd's Algorithm for k-means Clustering
8.4 Meaningful Clustering via Singular Value Decomposition
8.5 Recursive Clustering Based on Sparse Cuts
8.6 Kernel Methods
8.7 Agglomerative Clustering
8.8 Communities, Dense Submatrices
8.9 Flow Methods
8.10 Linear Programming Formulation
8.11 Finding a Local Cluster Without Examining the Whole Graph
8.12 Axioms for Clustering
8.12.1 An Impossibility Result
8.12.2 A Satisfiable Set of Axioms
8.13 Exercises
9 Graphical Models and Belief Propagation
9.1 Bayesian or Belief Networks
9.2 Markov Random Fields
9.3 Factor Graphs
9.4 Tree Algorithms
9.5 Message Passing Algorithm
9.6 Graphs with a Single Cycle
9.7 Belief Update in Networks with a Single Loop
9.8 Maximum Weight Matching
9.9 Warning Propagation
9.10 Correlation Between Variables
9.11 Exercises
10 Other Topics
10.1 Rankings
10.2 Hare System for Voting
10.3 Compressed Sensing and Sparse Vectors
10.3.1 Unique Reconstruction of a Sparse Vector
10.3.2 The Exact Reconstruction Property
10.3.3 Restricted Isometry Property
10.4 Applications
10.4.1 Sparse Vector in Some Coordinate Basis
10.4.2 A Representation Cannot be Sparse in Both Time and Frequency Domains
10.4.3 Biological
10.4.4 Finding Overlapping Cliques or Communities
10.4.5 Low Rank Matrices
10.5 Exercises
11 Appendix
11.1 Asymptotic Notation
11.2 Useful Inequalities
11.3 Sums of Series
11.4 Probability
11.4.1 Sample Space, Events, Independence
11.4.2 Variance
11.4.3 Variance of Sum of Independent Random Variables
11.4.4 Covariance
11.4.5 The Central Limit Theorem
11.4.6 Median
11.4.7 Unbiased Estimators
11.4.8 Probability Distributions
11.4.9 Maximum Likelihood Estimation (MLE)
11.4.10 Tail Bounds
11.4.11 Chernoff Bounds: Bounding of Large Deviations
11.4.12 Hoeffding's Inequality
11.5 Generating Functions
11.5.1 Generating Functions for Sequences Defined by Recurrence Relationships
11.5.2 Exponential Generating Function
11.6 Eigenvalues and Eigenvectors
11.6.1 Eigenvalues and Eigenvectors
11.6.2 Symmetric Matrices
11.6.3 Extremal Properties of Eigenvalues
11.6.4 Eigenvalues of the Sum of Two Symmetric Matrices
11.6.5 Norms
11.6.6 Important Norms and Their Properties
11.6.7 Linear Algebra
11.6.8 Distance Between Subspaces
11.7 Miscellaneous
11.7.1 Variational Methods
11.7.2 Hash Functions
11.7.3 Catalan Numbers
11.7.4 Sperner's Lemma
11.8 Exercises
Index
References