About MySQL connect problem

spiketang 2004-06-28 05:42:31
I have installed the MySQL database.
At the beginning it could be used normally,
but after I ran the following command it started behaving abnormally:
C:\Mysql\bin>mysqladmin -u gz-spike password spike

Now I can still connect:
C:\Mysql\bin>mysql (success)
but I can only operate on the test database; all other databases are inaccessible.
e.g.:
>use mysql;
failure!
The error is something like: Access denied for user ''@'localhost' to database 'mysql'
>use test;
Database changed

If I try to create a database, that also fails.

I also want to log in as the root user:
>mysql -u root [mysql] or [test]
but that is denied as well.

I copied the [user] table from the [mysql] database into the [test] database and looked at it:
>use test;
Database changed
>select * from user;
I can see two 'root' rows in the table, so why can I not log in as root?

So, how do I deal with this? Please give me some help!
Thanks a lot!
4 replies
spiketang 2004-08-20
The question has been solved.
I had made a mistake: one version of MySQL had already been installed along with Apache on my computer, so when I installed another version of MySQL, the previous MySQL server was still running. After I changed the MySQL server's path in the registry, everything was OK!
Thanks for your help!
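For anyone hitting the same problem, a quick way to see which binary a Windows MySQL service actually starts is the sc tool; the BINARY_PATH_NAME line in its output shows the mysqld path the service points to (the service name MySQL below is only an assumption, yours may differ):

C:\>sc qc MySQL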
spiketang 2004-06-29
If something were wrong with the authorized users, then with the command C:\mysql\bin>mysql
I should not be able to operate on any database at all; it has nothing to do with the root user!
I can get into the server, but I can only operate on the test database and not the mysql database. Why?
And all the other authorized users are invalid.

I have installed it again, and nothing improved.
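A quick way to check which account the server actually matched you to (CURRENT_USER() may not exist on very old servers; USER() alone still gives a hint):

mysql> SELECT USER(), CURRENT_USER();

If CURRENT_USER() comes back with an empty user name, such as @localhost, you were logged in as the anonymous account, which in a default install usually only has rights on the test database.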
loveflea 2004-06-28
I suggest deleting the original authorized users and creating a new root user!
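A small sketch of that suggestion, run from the mysql client while the server is started with --skip-grant-tables as described in the reply below ('new_password' is only a placeholder):

DELETE FROM mysql.user WHERE User = '';
UPDATE mysql.user SET Password = PASSWORD('new_password') WHERE User = 'root';
FLUSH PRIVILEGES;

Strictly speaking this resets the existing root rows instead of creating a brand-new user, but the effect is the same: the anonymous accounts are gone and root can log in with a known password.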
loveflea 2004-06-28
1. Stop your MySQL:
net stop mysql

2. Run:
mysqld-nt --skip-grant-tables

3. In another shell, run directly:
mysql.exe

4. show databases;
Then add or modify your user information (a small sketch follows below).
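For step 4, a small sketch of inspecting the grant table before changing anything:

mysql> SELECT Host, User, Password FROM mysql.user;

Rows with an empty User column are anonymous accounts; if the client gets matched to one of those it normally only has rights on the test database, which fits the symptoms in the original post.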


Or:

There are two methods.

One is to replace your existing mysql database with the default mysql database.
That way all user privileges are restored to the state of a freshly installed MySQL.
This method is not ideal.

The other is to add skip-grant-tables at startup.

For example, under Win32, edit the my.ini file and add it under the [mysqld] section.
Under Linux, you can edit the /etc/my.cnf file:
[mysqld]
basedir = ....
datadir = ....
skip-grant-tables
...

After saving, just restart the MySQL service.
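On Windows the restart can be done from the command line (assuming the service is registered under the name mysql, as in step 1 above):

net stop mysql
net start mysql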



Note that skip-grant-tables disables privilege checking entirely, so any login gets superuser privileges.
So after you have fixed the user grant tables,
please remove the line you added above
and then restart the service.... Be careful!
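Once the skip-grant-tables line has been removed and the service restarted, the password can also be changed in the normal way ('new_password' is a placeholder):

C:\mysql\bin>mysqladmin -u root -p password new_password

or, from a mysql client logged in as root:

SET PASSWORD FOR root@localhost = PASSWORD('new_password');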