Ubuntu 17.04: installing Redis

Install from source

$ wget http://download.redis.io/releases/redis-4.0.1.tar.gz
$ tar xzf redis-4.0.1.tar.gz
$ cd redis-4.0.1
$ make
$ sudo make PREFIX=/usr/local/redis install  # /usr/local is better; edit /etc/environment to add it to $PATH

Starting the server

Start in the background

$ redis-server &  # run redis-server in the background; a second method is to set daemonize in redis.conf
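The second method mentioned in the comment is to set the daemonize option in redis.conf and then start the server with that file. A minimal sketch of the relevant line:

```conf
# redis.conf: run the server as a background daemon instead of in the foreground
daemonize yes
```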

Start with a configuration file

redis-server /path/to/redis.conf

Common error: shutdown failed

(error) ERR Errors trying to SHUTDOWN. Check logs.
Exception in thread "main" redis.clients.jedis.exceptions.JedisDataException: DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
chmod -R 775 /usr/local/redis

Common configuration changes

protected-mode

protected-mode no   # in the redis config file

bind address

# bind 127.0.0.1   # comment out the bind line in the redis config file to accept non-loopback connections

Hadoop core roles

Hadoop 2 background

Before Hadoop 2.0.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster. Each cluster had one NameNode, and if that machine or process became unavailable, the whole cluster was unavailable until the NameNode was restarted or brought up on a separate machine.
This limited the overall availability of the HDFS cluster in two ways:

In the case of an unplanned event such as a machine crash, the cluster was unavailable until an operator restarted the NameNode.

Planned maintenance events, such as software or hardware upgrades on the NameNode machine, caused windows of cluster downtime.

The HDFS High Availability feature addresses these problems by offering the option of running two redundant NameNodes in the same cluster, in an active/passive configuration with a hot standby. This allows fast failover to the standby NameNode when a machine crashes, and a graceful administrator-initiated failover for planned maintenance.
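A sketch of what the active/standby pair looks like in hdfs-site.xml; the nameservice id "mycluster", the NameNode ids nn1/nn2, and the host names are placeholders for illustration, not values from this post:

```xml
<!-- hdfs-site.xml: minimal HA naming; all values below are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>machine1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>machine2.example.com:8020</value>
</property>
```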

Hadoop 1 vs Hadoop 2

From the perspective of the overall framework

Hadoop 1.0, the first generation, consists of the distributed storage system HDFS and the distributed computing framework MapReduce.
HDFS is made up of one NameNode and multiple DataNodes; MapReduce of one JobTracker and multiple TaskTrackers.

Hadoop 2.0, the second generation, addresses the shortcomings of Hadoop 1.0:
For the scalability limit the single NameNode imposed on HDFS, it introduces HDFS Federation,
which lets multiple NameNodes manage different directories, giving access isolation and horizontal scaling
(the NameNode single point of failure itself is addressed by the HA feature described above).
For MapReduce's weaknesses in scalability and multi-framework support, it splits the JobTracker's
resource management and job control apart, handled respectively by the ResourceManager (resource allocation
for all applications) and the ApplicationMaster (management of a single application);
in other words, it introduces the resource management framework YARN.
As Hadoop 2.0's resource management system, YARN is a general-purpose module that can manage and schedule
resources for all kinds of applications, not just MapReduce: other frameworks such as Tez, Spark, and Storm can use it too.

From the perspective of the MapReduce framework

The MapReduce 1.0 framework has three main parts: the programming model, the data processing engine, and the runtime environment.
The programming model abstracts a problem into two phases, Map and Reduce. The Map phase parses the input into key/value pairs,
calls map() on them iteratively, and writes the results, again as key/value pairs, to a local directory;
the Reduce phase aggregates the values that share the same key and writes the final result to HDFS.

The data processing engine consists of MapTasks and ReduceTasks, which run the Map-phase and Reduce-phase logic respectively.
The runtime environment consists of two kinds of services, one JobTracker and a number of TaskTrackers:
the JobTracker handles resource management and controls all jobs,
while each TaskTracker receives commands from the JobTracker and executes them.
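The Map/Reduce model described above can be sketched in miniature; JavaScript is used for illustration, and the shuffle/group step that the framework normally performs between the two phases is simulated in-process:

```javascript
// map: parse one line of input into [word, 1] key/value pairs
function map(line) {
  return line.split(/\s+/).filter(Boolean).map(function (w) { return [w, 1]; });
}

// reduce: fold all the values that share one key into a single count
function reduce(key, values) {
  return [key, values.reduce(function (a, b) { return a + b; }, 0)];
}

// a tiny in-process stand-in for the framework's shuffle/group step
var pairs = [].concat.apply([], ['a b a', 'b c'].map(function (l) { return map(l); }));
var grouped = {};
pairs.forEach(function (kv) { (grouped[kv[0]] = grouped[kv[0]] || []).push(kv[1]); });
var result = Object.keys(grouped).map(function (k) { return reduce(k, grouped[k]); });
console.log(result); // [ [ 'a', 2 ], [ 'b', 2 ], [ 'c', 1 ] ]
```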

MapReduce 2.0 has the same programming model and data processing engine as MRv1; only the runtime environment differs.
MRv2 is MRv1 reworked to run as a computing framework on top of the resource manager YARN.
Its runtime environment is no longer made up of JobTracker and TaskTracker services
but of the general resource management system YARN and a per-application control process, the ApplicationMaster:
YARN handles resource scheduling, and the ApplicationMaster manages the job.

Debian: setting up sudo


A freshly installed Debian does not have sudo yet.

1. Install sudo

# apt-get install sudo

2. Make /etc/sudoers writable

# chmod +w /etc/sudoers

3. Edit /etc/sudoers and add the lines below (visudo is the safer way to edit this file, since it checks the syntax before saving)

# vim /etc/sudoers
root ALL=(ALL) ALL
shuai ALL=(ALL) ALL

4. Make /etc/sudoers read-only again

# chmod -w /etc/sudoers

Flume study notes

What it does

Automatically watches logs for changes and collects them.

Core components

Source: collects the log data, packages it into transactions and events, and pushes it into the channel.
Channel: provides a queue, briefly buffering the data supplied by the source.
Sink: takes data out of the channel and stores it in a file system or database, or ships it to a remote server.
Architecture diagram (image omitted)

The key thing to learn: customizing the configuration, i.e. the component types.

Flume(2)组件概述与列表

Flume 入门–几种不同的Sources

Error: could not find or load main class org.apache.flume.tools.GetJavaProperty

Cause: the HBase configuration.

org/apache/flume/tools/GetJavaProperty

Config example 1: spooldir

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
# The configuration file needs to define the sources, 
# the channels and the sinks.
# Sources, channels and sinks are defined per agent, 
# in this case called 'agent'
agent1.sources = avro-source1
agent1.channels = ch1
agent1.sinks = logger-sink1
# sources
agent1.sources.avro-source1.type = spooldir
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.spoolDir = /home/shuai/logs/
agent1.sources.avro-source1.fileHeader = true
# bind/port are leftovers from the avro template; the spooldir source ignores them
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 4141
# sink
agent1.sinks.logger-sink1.type = logger
agent1.sinks.logger-sink1.channel = ch1
# channel
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000
agent1.channels.ch1.transactionCapacity = 100

Starting it

Started Flume following the tutorial:

Flume安装,部署与应用案例

./flume-ng agent --conf ../conf --conf-file ../conf/flume-spool.conf --name agent -Dflume.root.logger=INFO,console

No error is reported, but nothing happens either.

Parameter          Purpose                                                          Example
--conf / -c        config directory, containing flume-env.sh and the log4j config   --conf ../conf
--conf-file / -f   path of the agent configuration file                             --conf-file ../conf/flume.conf
--name / -n        agent name                                                       --name a1
-z                 ZooKeeper connection string                                      -z zkhost:2181,zkhost1:2181
-p                 path prefix within ZooKeeper                                     -p /flume

Reading the log

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hadoop/apache-flume-1.8.0-bin/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/hadoop/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2017-10-15 21:44:02,380 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:62)] Configuration provider starting
2017-10-15 21:44:02,382 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:134)] Reloading configuration file:../conf/flume-spool.conf
2017-10-15 21:44:02,385 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:logger-sink1
2017-10-15 21:44:02,386 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: logger-sink1 Agent: agent1
2017-10-15 21:44:02,386 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:logger-sink1
2017-10-15 21:44:02,392 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [agent1]
2017-10-15 21:44:02,392 (conf-file-poller-0) [WARN - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:135)] No configuration found for this host:agent
2017-10-15 21:44:02,396 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:137)] Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }

The agent's name is agent1; rerunning with --name agent1 worked:

./flume-ng agent --conf ../conf --conf-file ../conf/flume-spool.conf --name agent1 -Dflume.root.logger=INFO,console

Config example 2: spooldir

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.


# The configuration file needs to define the sources, 
# the channels and the sinks.
# Sources, channels and sinks are defined per agent, 
# in this case called 'agent'

agent.sources = seqGenSrc
agent.channels = memoryChannel
agent.sinks = loggerSink

# For each one of the sources, the type is defined
agent.sources.seqGenSrc.type = spooldir
agent.sources.seqGenSrc.spoolDir = /data/nginx/logs
# The channel can be defined as follows.
agent.sources.seqGenSrc.channels = memoryChannel

# Each sink's type must be defined
agent.sinks.loggerSink.type = logger

#Specify the channel the sink should use
agent.sinks.loggerSink.channel = memoryChannel

# Each channel's type is defined.
agent.channels.memoryChannel.type = memory

# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.transactionCapacity = 100

This uses the default template that ships with Flume: change the source type to spooldir and add the directory to watch. With this file the agent name must be 'agent'.
➜ bin ./flume-ng agent --conf ../conf --conf-file ../conf/flume-conf.properties --name agent -Dflume.root.logger=INFO,console

References

  1. Flume(2)组件概述与列表
  2. Flume 入门–几种不同的Sources
  3. Flume安装,部署与应用案例

EasyUI study notes, day 6

Problems encountered

  • Parameterizing the datagrid

A jQuery selector can be a variable:

var moduleDatagrid = '#pt_dg'
function reject(moduleDatagrid) {
// the key step: the selector comes in as a variable
var dg = $(moduleDatagrid)
dg.datagrid('rejectChanges');
// clear the global flags after reverting
editIndex = undefined;
editLength = 0;
submitNum = 0;
}

The data option of the datagrid depends on what the ajax call returns; how do you get it in there?
Pull the rest of the options out into an object first; then, in the ajax success callback,
attach the data property:

var options = {}
$.ajax({
// ****
success:function(data){
var dg_data = data
options.data = dg_data
$('#dg').datagrid(options)
}
// ***
})

Code

Buffered (in-grid) editing

/**
*
*/
var editIndex;
var editLength;
var submitNum = 0;
//form operations share one save function for create and update; a formMethod variable tells them apart
var urlPrefix = 'http://localhost:8080/ACM/manage_page';
//
//module-specific variables
function pagerFilter(data){
if (typeof data.length == 'number' && typeof data.splice == 'function'){ // is the data a plain array?
data = {
total: data.length,
rows: data
}
}
var dg = $(this);
var opts = dg.datagrid('options');
var pager = dg.datagrid('getPager');
pager.pagination({
// pageSize: 10,//records per page; default 10
pageList: [5,10,15,20],//selectable page sizes
// displayMsg: '当前显示 {from} - {to} 条记录 共 {total} 条记录',
beforePageText: '第',//text shown before the page-number box
afterPageText: '页 共 {pages} 页',
displayMsg: '当前显示 {from} - {to} 条记录 共 {total} 条记录',
onSelectPage:function(pageNum, pageSize){
opts.pageNumber = pageNum;
opts.pageSize = pageSize;
pager.pagination('refresh',{
pageNumber:pageNum,
pageSize:pageSize
});
dg.datagrid('loadData',data);
}
});
if (!data.originalRows){
data.originalRows = (data.rows);
}
var start = (opts.pageNumber-1)*parseInt(opts.pageSize);
var end = start + parseInt(opts.pageSize);
data.rows = (data.originalRows.slice(start, end));
return data;
}
function submitToDB(moduleDatagrid,moduleName) {
var dg = $(moduleDatagrid)
if(editIndex != undefined){
alert("请先保存更改的内容")
return
}
dg.datagrid('loaded');
if (submitNum > 0) {
var submittedNum
var updateRows = dg.datagrid('getChanges', 'updated');
if (updateRows.length > 0) {
// submitNum += updateRows.length;
subSave(dg,moduleName,"update", updateRows);
alert('成功更新了' + updateRows.length + '行');
submitNum -= updateRows.length;
query(moduleDatagrid,moduleName);
}
var deleteRows = dg.datagrid('getChanges', 'deleted');
if (deleteRows.length > 0) {
// submitNum += deleteRows.length;
subSave(dg,moduleName,"delete", deleteRows);
alert('成功删除了' + deleteRows.length + '行');
submitNum -= deleteRows.length;
query(moduleDatagrid,moduleName);
}
var insertRows = dg.datagrid('getChanges', 'inserted');
// inserts submit post-processed data; the processing lives in each module's js file, uniformly via getAddRowsWithRightFormat()
if (insertRows.length > 0){
var rightFormatRows = getAddRowsWithRightFormat(moduleDatagrid)
subSave(moduleDatagrid,moduleName,"add", rightFormatRows);
alert('成功增加了' + insertRows.length + '行');
submitNum -= insertRows.length;
query(moduleDatagrid,moduleName);
}
}
}
//submit deletes and updates to the backend
function subSave(moduleDatagrid,moduleName,method, rows) {
var dg = $(moduleDatagrid)
var msg;
$.each(rows, function(i, o) { // i: loop index, o: the current object
o = rows[i]
var url = urlPrefix + moduleName + method + '.do';
$.ajax({
url : url,
type : "POST",
data : JSON.stringify(o),
success : function(data) {
if (msg) {
$.messager.alert('错误', '操作失败:' + msg, 'error');
dg.datagrid('loaded');
}
},
error:function(data){
alert("操作失败")
},
dataType : "json",
contentType : "application/json"
});
dg.datagrid('acceptChanges');
dg.datagrid('loaded');
dg.datagrid('reload');
});
}
$.fn.serializeObject = function()
{
var o = {};
var a = this.serializeArray();
$.each(a, function() {
if (o[this.name] !== undefined) {
if (!o[this.name].push) {
o[this.name] = [o[this.name]];
}
o[this.name].push(this.value || '');
} else {
o[this.name] = this.value || '';
}
});
return o;
}
//buffered insert of one record
function insertRow(moduleDatagrid){
var dg= $(moduleDatagrid)
if(editIndex != undefined ){
dg.datagrid("endEdit",editIndex)
}
if(editIndex == undefined){
dg.datagrid("insertRow",{
index : 0,
row : {}
})
dg.datagrid("beginEdit",0)
editIndex = 0
}
}
//buffered removal of rows
function removeRows(moduleDatagrid){
var dg = $(moduleDatagrid)
var removeRows = dg.datagrid('getChecked')
if(removeRows.length>0){
$.each(removeRows,function(index,row){
index = dg.datagrid('getRowIndex', row)
dg.datagrid('deleteRow', index)
})
editIndex = undefined;
// report the number of removed rows to the outside
submitNum += removeRows.length
}
else{
editIndex = undefined;
}
}
//save the current edit
function saveEditing(moduleDatagrid) {
var dg = $(moduleDatagrid)
if (editIndex != undefined) {
dg.datagrid('endEdit', editIndex);
editIndex = undefined;
dg.datagrid('reload')
}
var updateRows = dg.datagrid('getChanges', 'updated')
var insertRows = dg.datagrid('getChanges', 'inserted')
submitNum += updateRows.length
submitNum += insertRows.length
}
function getChanges() {
if (editIndex == undefined) {
alert(submitNum + '行被改变了!');
} else {
alert("请先保存编辑的行")
}
}
//revert changes
function reject(moduleDatagrid) {
var dg = $(moduleDatagrid)
dg.datagrid('rejectChanges');
// clear the global flags after reverting
editIndex = undefined;
editLength = 0;
submitNum = 0;
}
function editRow(moduleDatagrid) {
var dg = $(moduleDatagrid)
if (editIndex != undefined)
return;
var row = dg.datagrid('getSelected');
if (row == undefined) {
$.messager.alert('提示', "请先单击选中要编辑的行", 'info');
return;
}
var index = dg.datagrid('getRowIndex', row);
dg.datagrid('beginEdit', index);
var editors = dg.datagrid('getEditors', index);
editIndex = index;
editLength = 1;
}

Form-based operations

/**
*
*/
//form operations share a single set of global flags
var formMethod
var url
var urlPrefix = 'http://localhost:8080/ACM/manage_page';
//custom jQuery plugin: serialize a form into a JSON object
$.fn.serializeObject = function()
{
var o = {};
var a = this.serializeArray();
$.each(a, function() {
if (o[this.name] !== undefined) {
if (!o[this.name].push) {
o[this.name] = [o[this.name]];
}
o[this.name].push(this.value || '');
} else {
o[this.name] = this.value || '';
}
});
return o;
}
function pagerFilter(data){
if (typeof data.length == 'number' && typeof data.splice == 'function'){ // is the data a plain array?
data = {
total: data.length,
rows: data
}
}
var dg = $(this);
var opts = dg.datagrid('options');
var pager = dg.datagrid('getPager');
pager.pagination({
// pageSize: 10,//records per page; default 10
pageList: [5,10,15,20],//selectable page sizes
// displayMsg: '当前显示 {from} - {to} 条记录 共 {total} 条记录',
beforePageText: '第',//text shown before the page-number box
afterPageText: '页 共 {pages} 页',
displayMsg: '当前显示 {from} - {to} 条记录 共 {total} 条记录',
onSelectPage:function(pageNum, pageSize){
opts.pageNumber = pageNum;
opts.pageSize = pageSize;
pager.pagination('refresh',{
pageNumber:pageNum,
pageSize:pageSize
});
dg.datagrid('loadData',data);
}
});
if (!data.originalRows){
data.originalRows = (data.rows);
}
var start = (opts.pageNumber-1)*parseInt(opts.pageSize);
var end = start + parseInt(opts.pageSize);
data.rows = (data.originalRows.slice(start, end));
return data;
}
//delete records directly
function deleteObjects(moduleDatagrid,moduleName) {
var dg = $(moduleDatagrid)
var rows = dg.datagrid('getChecked');
console.info(rows)
$.each(rows, function(index, object) {
object = rows[index]
$.ajax({
url : urlPrefix + moduleName+'delete.do',
type : "POST",
data : JSON.stringify(object),
success : function(data) {
alert("删除成功")
query(moduleDatagrid,moduleName);
},
error : function() {
alert("删除失败");
},
dataType : "json",
contentType : "application/json;charset=UTF-8"
});
$('#dg').datagrid('loaded');
$('#dg').datagrid('reload');
})
}
//form for adding a record
function newObjectForm(moduleDialog,moduleForm,moduleName){
var dlg = $(moduleDialog)
var fm = $(moduleForm)
dlg.dialog('open').dialog('setTitle','新增记录')
dlg.form('clear')
// each module initializes its form's foreign keys differently; this belongs in the module's own js
initFormCombobox()
url = urlPrefix + moduleName+'add.do'
formMethod = "add"
}
//form for editing a record
function editObjectForm(moduleDatagrid,moduleDialog,moduleForm,moduleDialogButtons,moduleName){
var dg = $(moduleDatagrid)
var dlg = $(moduleDialog)
var fm = $(moduleForm)
var dlg_buttons = $(moduleDialogButtons)
var row = dg.datagrid('getSelected');
if (row){
dlg.dialog('open').dialog('setTitle','编辑记录');
// call initFormCombobox before loading the form; otherwise the relation column is reset and the association can't be read
initFormCombobox()
dlg_buttons.attr('style','display : block')
fm.form('load',row);
url = urlPrefix + moduleName+'update.do'
console.log("保存之前的序列化")
var ptjson = fm.serializeObject()
console.log(ptjson)
console.log(url)
formMethod ="update"
}
}
function saveObjectForm(moduleForm){
var fm = $(moduleForm)
if(formMethod == undefined){return}
var fmJson = fm.serializeObject()
console.log("保存之后的序列化")
console.log(fmJson)
if(formMethod == "add"){
// convert to the correct JSON format; each module handles its format differently
fmJson = getFmJsonWithRightFormat(moduleForm)
}
console.log(url)
$.ajax({
url : url,
type : "POST",
data : JSON.stringify(fmJson),
dataType : "json",
contentType : "application/json",
success : function(data) {
$('#dlg').dialog('close')
alert("数据提交成功")
// clear the global flags on success or failure, so other modules aren't affected
url = undefined
formMethod = undefined
query(moduleDatagrid,moduleName);
},
error : function(data){
alert("数据提交失败")
url = undefined
formMethod = undefined
}
})
$('#dg').datagrid('loaded');
$('#dg').datagrid('reload');
}
function viewObjectForm(moduleDatagrid,moduleDialog,moduleForm,moduleDialogButtons){ // pass the buttons selector in; it was an undefined variable
var dg = $(moduleDatagrid)
var dlg = $(moduleDialog)
var fm = $(moduleForm)
var dlg_buttons = $(moduleDialogButtons)
var row = dg.datagrid('getSelected');
if (row){
dlg.dialog('open').dialog('setTitle','查看详细信息');
dlg_buttons.attr('style','display : none')
initFormCombobox()
fm.form('load',row);
}
}
function cancelOprateObejctForm(moduleDialog){
var dlg =$(moduleDialog)
dlg.form('clear')
dlg.dialog('close')
// clear the global flags
url = undefined
formMethod = undefined
}

EasyUI study notes, day 5

Problems to solve

  • easyUI pagination

  • Fuzzy-searching the master table by child-table fields

Approach

easyUI pagination

The standard shape easyUI pagination expects:
{"total": <count>, "rows": [<records>]}
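A runnable sketch of that conversion (the helper names are illustrative); this is essentially what the pagerFilter loadFilter below does with a plain array:

```javascript
// Wrap a plain array in the {total, rows} shape the datagrid pager expects.
function toPagerData(rows) {
  return { total: rows.length, rows: rows };
}

// Cut one page out, the way pagerFilter slices originalRows with pageNumber/pageSize.
function pageSlice(data, pageNumber, pageSize) {
  var start = (pageNumber - 1) * pageSize;
  return data.rows.slice(start, start + pageSize);
}

var data = toPagerData([{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }]);
console.log(data.total);            // 5
console.log(pageSlice(data, 2, 2)); // [ { id: 3 }, { id: 4 } ]
```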

Fuzzy-searching the master table by child-table fields

A MyBatis mapper method here can take only one parameter.

The frontend enters a field from the child table to fuzzy-search the master table.

The platform-customer table (ptkhb) stores the id of the customer table (khb).

Fuzzy-search the platform-customer table by customer name.

Wrap the query conditions into a nested JSON object

The frontend builds a JSON object that matches the entity model's associations:

var khmc = $('#query_name').val()
var kh = {}
kh.khmc = khmc
var queryCondition = {}
queryCondition.kh = kh
// the model is already associated, so the join query just works

MyBatis's OGNL expressions show their power here:

<if test="kh.getKhmc()!=null">
and b.khmc like '%' #{kh.khmc,jdbcType=VARCHAR}
</if>

Code

Implementing pagination

function pagerFilter(data){
if (typeof data.length == 'number' && typeof data.splice == 'function'){ // is the data a plain array?
data = {
total: data.length,
rows: data
}
}
var dg = $(this);
var opts = dg.datagrid('options');
var pager = dg.datagrid('getPager');
pager.pagination({
// pageSize: 10,//records per page; default 10
pageList: [5,10,15,20],//selectable page sizes
// displayMsg: '当前显示 {from} - {to} 条记录 共 {total} 条记录',
beforePageText: '第',//text shown before the page-number box
afterPageText: '页 共 {pages} 页',
displayMsg: '当前显示 {from} - {to} 条记录 共 {total} 条记录',
onSelectPage:function(pageNum, pageSize){
opts.pageNumber = pageNum;
opts.pageSize = pageSize;
pager.pagination('refresh',{
pageNumber:pageNum,
pageSize:pageSize
});
dg.datagrid('loadData',data);
}
});
if (!data.originalRows){
data.originalRows = (data.rows);
}
var start = (opts.pageNumber-1)*parseInt(opts.pageSize);
var end = start + parseInt(opts.pageSize);
data.rows = (data.originalRows.slice(start, end));
return data;
}
$.ajax({
type:"post",
url:urlPrefix + '/pt/query.do',
dataType:"json",
contentType:"application/json;charset=UTF-8",
data:JSON.stringify(queryCondition),
success: function(data){
$('#dg').datagrid({loadFilter:pagerFilter}).datagrid('loadData', data);
$('#dg').datagrid(options)
},
error:function(){
}
})

Frontend and backend data exchange

Background

I've recently been integrating frontend and backend. I'm responsible for the backend and have no frontend at all; how do I test?

Hard-coupled code

In PHP, JSP, and ASP, script is interleaved with HTML, sometimes with assorted extra tag libraries on top.
That kind of code is hard-coupled: it mixes too many concerns, the frontend and backend separate poorly,
and both debugging and maintenance suffer for it.

RESTful and a more thorough frontend/backend separation

It's 2016 now, and REST was proposed back in 2000.
Wikipedia: REST

阮一峰 理解RESTful架构

I used the RESTful style when writing the SpringMVC controllers.
For the past two days I've been testing the backend purely through URLs; imagine how convenient it would be if URLs were the whole interface between frontend and backend.
Passing a few parameters in a URL is fine, but testing object creation and updates raised a problem: how do you pass an object, or a list of objects?
REST separates frontend and backend as far as the current environment allows: few dependencies, each side minding only its own concerns.

Interaction approach:

Frontend to backend: for a few parameters, a GET request with query parameters is enough. For larger payloads, serialize to JSON on the frontend, send it in a POST body, and deserialize on the backend.
Backend to frontend: a JSON object, which JavaScript parses and uses to fill the page.
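A small sketch of the two directions (the helper name and URL are illustrative): a few parameters go in the query string, larger payloads get JSON.stringify'd into a POST body:

```javascript
// Few parameters: append them to the URL of a GET request.
function buildGetUrl(base, params) {
  var pairs = Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  });
  return base + '?' + pairs.join('&');
}

// Larger payload: serialize the object and send it as a POST body.
var body = JSON.stringify({ qxyId: '1', ptmc: '期货资管云688' });

console.log(buildGetUrl('/ACM/pt/query.do', { id: 1 }));
// /ACM/pt/query.do?id=1
```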

I suspect the rise of the RESTful style owes a lot to ajax; working through ajax today was genuinely pleasant. With jQuery's ajax wrapper you only have to care about the URL and the data.

How does the frontend learn whether a request succeeded?

In other words: the URL is right and the data is ready, but how do I know whether my request went through? Status codes.
This was the first time I appreciated how important status codes are; ajax tracks them for you.

On URL encoding

阮一峰 关于URL编码

Garbled Chinese in URL parameters

http://localhost:8080/ACM/pt/query.do?id=1&ptmc=估值
Setting the request encoding to UTF-8 on the backend wasn't enough by itself:

@RequestMapping(value= "pt/query.do",method=RequestMethod.GET)
public @ResponseBody List<Pt> getPtbyName(HttpServletRequest request) throws UnsupportedEncodingException{
request.setCharacterEncoding("UTF-8");
String ptmc = request.getParameter("ptmc").trim();
ptmc = new String(ptmc.getBytes("iso-8859-1"), "utf-8");
System.out.println(ptmc);
return null; // the actual query is elided in this snippet
}

Whatever the mojibake is, read the bytes back with one known encoding first, then re-encode them.

Mojibake just means the byte stream was decoded with the wrong charset. Swapping ISO-8859-1 for GB18030 still produced garbage, so this case seems special.
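The same "read back with a known encoding, then re-encode" trick in a runnable sketch (Node's Buffer is used for illustration; the server-side code above does it in Java). ISO-8859-1 works as the intermediate because it maps every byte 0-255 to a character reversibly:

```javascript
// Simulate the mojibake: UTF-8 bytes wrongly decoded as ISO-8859-1 (latin1).
var garbled = Buffer.from('估值', 'utf8').toString('latin1');

// The fix: turn the characters back into the original bytes, then decode as UTF-8.
var fixed = Buffer.from(garbled, 'latin1').toString('utf8');

console.log(fixed); // 估值
```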

ajax and SpringMVC interaction

@RequestMapping(value="pt/add.do")
public @ResponseBody String addPt(@RequestBody Pt record){
int flag = this.ptService.addPt(record);
if (flag==1) {
return "success";
} else {
return "error";
}
}

@RequestBody binds the received payload (serialized JSON, not a JSON object) to the model,
so if the fields don't match the model's fields the request cannot succeed.

@ResponseBody sends the returned object straight to the frontend as JSON.

Some references say @RequestMapping needs a matching media-type rule (consumes="application/json"); in my experiments it worked with or without it.

<script src="./jquery-1.11.3.min.js"></script>
<script type="text/javascript">
$(document).ready(function(){
$.ajax({
type:"POST",
url:"http://localhost:8080/ACM/manage_page/pt/add.do",
dataType:"json",
contentType:"application/json;charset=UTF-8",
data:JSON.stringify({"qxyId":"1","ptmc":"期货资管云688","jcpt":"阿里云"}),
success:function(data){
alert("添加成功")
} ,
error:function(data){
alert("添加失败")
}
});
});
</script>

Note: @RequestBody receives serialized JSON (a string), not a JSON object directly.

Cross-origin request errors

I built static pages locally without putting them on a web server, and requests failed.

Local file; browser URL: file:///C:/Users/siys16877/Desktop/front-end/index.html

The status code was normal.

The console reported an error (screenshot omitted).
